The web in a box - a next generation web framework for the Perl programming language

IRC log for #mojo, 2015-09-26


All times shown according to UTC.

Time Nick Message
00:00 mattastrophe joined #mojo
00:28 kanishka joined #mojo
00:59 bpmedley joined #mojo
01:14 disputin joined #mojo
01:46 kanishka hai
02:11 good_news_everyon joined #mojo
02:11 good_news_everyon [mojo] kraih pushed 1 new commit to master: http://git.io/vnxi1
02:11 good_news_everyon mojo/master 09579d3 Sebastian Riedel: use a slightly different example for uniq method
02:11 good_news_everyon left #mojo
02:26 ZoffixMobile joined #mojo
02:40 noganex_ joined #mojo
02:56 SmokeMachine hi!
02:57 SmokeMachine a plugin can have a static file from __DATA__?
03:07 bpmedley SmokeMachine: https://bitbucket.org/snippets/bpmedley/RoaB7 <-- Looks like it
03:09 SmokeMachine bpmedley: that isn't working for me...
03:10 bpmedley I added another file.  Would you reload?  How does your output differ from the new file?
03:11 pink_mist bpmedley: why is there a __END__ after the __DATA__? that's useless? 0_o
03:12 bpmedley pink_mist: That's where the POD was.  I removed it for brevity.
03:14 SmokeMachine bpmedley: that's very different...
03:14 SmokeMachine but the __DATA__ is there and the register() also...
03:16 bpmedley I updated to include some example POD.
03:22 SmokeMachine https://www.irccloud.com/pastebin/m3f5vAZG/bpmedley%3A%20even%20this%20isn't%20working...
03:22 SmokeMachine bpmedley: ^
03:23 bpmedley Would you try "@@ filename.ext" <-- note the space
03:26 pink_mist https://metacpan.org/source/SRI/Mojolicious-6.21/lib/Mojo/Loader.pm#L75 <-- space should be optional, see the \s*
03:27 bpmedley That's cool.  I didn't know that.  SmokeMachine , may we see the code that uses your plugin?
03:30 SmokeMachine bpmedley: my last pastebin isn't working either...
03:31 pink_mist SmokeMachine: that's why he asked to see the code that uses it
03:31 pink_mist SmokeMachine: so we could determine how you're using it, and glean /why/ it isn't working
03:33 SmokeMachine bpmedley, pink_mist:  oh! sorry! that's here! (my mistake) https://www.irccloud.com/pastebin/JoETaXFG/
03:33 pink_mist SmokeMachine: that doesn't try to render using data_section() like bpmedley showed you
03:34 SmokeMachine pink_mist: I try to access it from browser...
03:34 pink_mist SmokeMachine: you need the data_section call to read from a __DATA__ section that isn't added to the $renderer->classes array
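A quick aside on what data_section does: Mojo::Loader exports a data_section function that pulls a named file out of a class's __DATA__ section without involving the renderer or the static file server. A minimal sketch (the Foo::Bar class and test.txt entry are made up for illustration):

    use Mojo::Loader qw(data_section);

    # Fetch the "@@ test.txt" entry from Foo::Bar's __DATA__ section
    my $content = data_section('Foo::Bar', 'test.txt');

    # Without a name it returns all entries from that section
    my $all = data_section('Foo::Bar');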
03:34 SmokeMachine http://127.0.0.1:3000/bla.js
03:34 pink_mist that's not how it works
03:34 bpmedley SmokeMachine: Just a sec!
03:35 SmokeMachine bpmedley: even 2 secs... :P
03:37 bpmedley SmokeMachine: https://bitbucket.org/snippets/bpmedley/89549   <-- Try http://127.0.0.1/joy.txt
03:37 SmokeMachine pink_mist: but how should I make the server serve that "file"?
03:37 bpmedley http://127.0.0.1:3000/joy.txt, even
03:41 bpmedley SmokeMachine: Can you launch the Mojolicious::Lite app via "daemon" and access joy.txt utilizing your browser?
03:42 SmokeMachine bpmedley: no, I couldn't... I could swear that this would work... :(
03:42 SmokeMachine I couldn't access the file...
03:43 bpmedley perl joy.pl get /joy.txt <-- Does this work?
03:44 SmokeMachine bpmedley https://www.irccloud.com/pastebin/lQv4RYkF/
03:44 bpmedley perl -e ?
03:44 bpmedley Are you saving the code to a real file?
03:45 SmokeMachine no, running with -e...
03:46 SmokeMachine bpmedley: but that's the same thing...
03:46 bpmedley https://bitbucket.org/snippets/bpmedley/MA67d <-- Pls try this way
03:48 bpmedley SmokeMachine: Doesn't -e only enter one line of program?
03:48 SmokeMachine bpmedley: no...
03:48 SmokeMachine it "evaluates"...
03:48 bpmedley Would you try the <<< command?
03:48 SmokeMachine the same result...
03:49 bpmedley Hrmm.  Would you try saving to a file?
03:49 SmokeMachine I'll do that, but I believe that it's not that...
03:49 bpmedley Thanks for being patient.
03:51 SmokeMachine bpmedley: I have to thank you!
03:51 bpmedley Awesome!
03:51 SmokeMachine bpmedley: believe it or not, that worked! :(
03:51 bpmedley I'm not sure why <<< didn't work.  Was there "-" in the command before daemon?
03:55 jberger anyone played ARK before?
03:55 bpmedley So, about my earlier Minion communiqué.  I was presenting a proposition for a future idea.  I should have been clear from the start on that point.  Perhaps tomorrow, or some other day, I can try to articulate myself again.
03:55 jberger I'm just trying it
03:55 bpmedley I just live there.
04:00 bpmedley SmokeMachine: Do you use -e '' construct for efficiency?  I assume it works well for just about everything except __DATA__ stuff?
04:01 SmokeMachine bpmedley: every test I want to do I use the -e... it's easier than writing a file...
04:01 bpmedley That's super sweet.  I've never thought to do that.
04:14 jberger the commands system and -Mojo do most things for you
04:14 jberger except __DATA__
04:14 jberger which I realize was the focus of this experiment
04:14 jberger :-P
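The -Mojo switch jberger mentions comes from the ojo module; a one-liner in roughly the style of its documentation, which covers quick experiments like this (minus __DATA__):

    # Spin up a tiny Mojolicious::Lite app straight from the shell
    perl -Mojo -E 'a("/hello" => {text => "Hello Mojo!"})->start' daemon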
06:09 irqq joined #mojo
06:36 asm35 joined #mojo
07:34 Vandal joined #mojo
08:01 bpmedley_ joined #mojo
08:23 dod joined #mojo
08:29 dod joined #mojo
08:39 amon joined #mojo
08:51 cpan_mojo Mojolicious-Plugin-BModel-0.08 by BCDE https://metacpan.org/release/BCDE/Mojolicious-Plugin-BModel-0.08
09:16 sh4 joined #mojo
09:34 asm35 joined #mojo
10:05 asarch joined #mojo
10:12 panshin joined #mojo
10:18 panshin_ joined #mojo
10:30 meshl joined #mojo
10:40 Craftsmanship joined #mojo
10:55 Craftsmanship so, should I be using Mojo::Asset for my misc-css-framework.css and my jquery.js stuff?
10:55 Craftsmanship I assume the idea is that I can ask it for urls to non-code based stuff
10:56 Craftsmanship (so in the future when my little app is a huge success I can move all my stuff to a cookie-less domain and/or cdns)
11:19 trone joined #mojo
11:53 Zoffix Craftsmanship, use this: https://metacpan.org/pod/Mojolicious::Plugin::AssetPack
11:54 Zoffix What's misc CSS framework? There might be a plugin for it already. Like https://metacpan.org/pod/Mojolicious::Plugin::Bootstrap3
12:27 meshl joined #mojo
12:31 sri SmokeMachine, bpmedley: there is literally a recipe for that http://mojolicio.us/perldoc/Mojolicious/Guides/Rendering#Bundling-assets-with-plugins
12:34 jberger sri: jinx
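The recipe sri links amounts to pushing the plugin class onto the app's static and renderer class lists so its __DATA__ files get served like normal assets. A minimal sketch along those lines (the plugin name and the bla.js file are invented for illustration):

    package Mojolicious::Plugin::MyBundle;
    use Mojo::Base 'Mojolicious::Plugin';

    sub register {
      my ($self, $app) = @_;

      # Let the static file server and the renderer search this class's __DATA__ section
      push @{$app->static->classes},   __PACKAGE__;
      push @{$app->renderer->classes}, __PACKAGE__;
    }

    1;
    __DATA__

    @@ bla.js
    console.log('served from the plugin');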
12:42 Craftsmanship Zoffix: literally anything except bootstrap, purecss in this case
12:45 Zoffix Craftsmanship, hehe :) Well, AssetPack plugin will work well for you, I think
12:48 Craftsmanship I see.
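A rough sketch of the AssetPack usage Zoffix is suggesting, with invented file names; the plugin's API has changed over the years, so treat this as the general shape rather than gospel:

    use Mojolicious::Lite;

    # Bundle (and optionally minify) assets, then serve them via the "asset" helper
    plugin 'AssetPack';
    app->asset('app.css' => '/css/purecss.css', '/css/site.css');
    app->asset('app.js'  => '/js/jquery.js',    '/js/site.js');

    # In a template:
    #   %= asset 'app.css'
    #   %= asset 'app.js'

    app->start;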
12:49 mattastrophe joined #mojo
13:03 panshin joined #mojo
13:19 absolut_todd joined #mojo
13:30 bpmedley sri: I believe the issue was how SmokeMachine was running the app, not the app itself.
13:36 sri bpmedley: the plugin you made couldn't work https://bitbucket.org/snippets/bpmedley/RoaB7
13:36 sri you never added the class
13:38 asm35 joined #mojo
13:38 sri re job dependencies in minion, i have a feeling you want to add more stuff to the worker command?
13:38 sri i don't really want that to get bigger
13:38 bpmedley sri: https://bitbucket.org/snippets/bpmedley/RoaB7#data_example.pl-7 <-- I'm confused doesn't this load the class?
13:39 sri sure, but nobody uses data_section like that
13:39 sri normally you'd add the class to the renderer/static file server
13:39 bpmedley Understood.  However, there was confusion at the beginning of the conversation.
13:39 bpmedley I didn't realize they were using -e to run the app.
13:41 bpmedley Does the conversation make a little more sense given that I didn't know how they were running the app and had made a wrong assumption?
13:42 sri little
13:45 sri i wanted to do job dependencies with ->enqueue(foo => {depends_on => [23, 45]})
13:45 sri but there's a lot of problems with that approach
13:45 sri like keeping dequeue fast
13:45 bpmedley sri: With regards to the dependencies in Minion.  I may be really off the mark; however, can sequentially enqueued jobs be guaranteed to run in order by changing "order by priority desc, created" to "order by priority desc, id"?    (https://github.com/kraih/minion/blob/master/lib/Minion/Backend/Pg.pm#L191)
13:46 sri bpmedley: no
13:46 sri just imagine you have separate workers to handle jobs
13:47 sri one to do x and one to do y, y might be enqueued second, but the worker is faster... race condition
13:48 sri (x and y being different tasks)
13:50 sri always remember it's a distributed system, and things can be eventually consistent
13:52 bpmedley Understood, thanks for listening.
13:53 sri you could try just adding jobs with different priorities to get the same result
13:53 sri first one with a low priority, second with a high priority
13:53 sri umm
13:53 sri the other way around ;p
13:54 sri but the same problem applies
13:54 sri if you have special workers for certain tasks
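A sketch of that priority workaround with made-up task names: the job that has to run first gets the higher priority, and both tasks stay on the same workers so the ordering actually holds:

    # Higher priority is dequeued first; the same workers must handle both tasks
    $minion->enqueue(scale_photos => [$post_id] => {priority => 10});
    $minion->enqueue(publish_post => [$post_id] => {priority => 0});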
13:54 bpmedley Sorry, I was thinking that sequentially starting the jobs in order would be ok; however, the jobs need to start in order and not start until the prior dependencies have finished. Eek.
13:55 sri jobs are already performed kinda ordered
13:55 sri right, that's another problem
13:55 CandyAngel Couldn't you use the frozen task workaround and a heartbeat to start frozen tasks whose dependencies have finished?
13:56 sri CandyAngel: then you could just as well have earlier jobs enqueue the followups
13:56 sri if they unfreeze the job or enqueue it themselves, doesn't really make much of a difference there
13:57 CandyAngel sri: That might mean giving the worker a load of information that it doesn't do anything with *except* passing it on
13:57 sri possibly
13:58 CandyAngel Which might mean it gets passed on a whole bunch of times
13:58 CandyAngel Errr
13:58 CandyAngel Which has a name
13:58 sri but that's not really a big deal imo
13:58 CandyAngel Inversion of Control or something
13:58 sri anyway, both are hacks
14:00 CandyAngel Also
14:00 CandyAngel Actually, never mind
14:01 CandyAngel What I was thinking of would happen either way (code changes with outstanding jobs)
14:01 CandyAngel The freeze/thaw thing gives you a more accurate idea of how many jobs are outstanding though
14:01 CandyAngel Rather than having phantom jobs you don't know about until the previous job is done
14:02 sri if the earlier job fails, you have phantom frozen jobs
14:02 sri that will never get performed or cleaned up
14:02 CandyAngel True
14:04 sri you bring up a good point though, that's an ugnly problem even if the dependencies are stored with the job
14:05 sri s/n//
14:05 sri error handling gets messy
14:05 sri you have to search for depending jobs
14:05 CandyAngel Yeah
14:06 * sri doesn't like job dependencies anymore
14:06 CandyAngel :P
14:07 CandyAngel Just like I don't like firefox
14:07 CandyAngel 2 tabs, no addons.. 1GB of RAM used
14:07 sri that's every browser now
14:08 panshin joined #mojo
14:08 CandyAngel (on the flipside, when firefox running out of memory locked my VM up, nothing else on my computer was affected!)
14:08 CandyAngel So my new setup is amazing! <3
14:09 pink_mist CandyAngel: 0_o you must have some humongous pages in your firefox ... here I have 20 tabs, about a dozen addons, and 350MB ram used by firefox
14:09 pink_mist that's still more than I'd want though ... but it's nowhere near 1GB
14:09 CandyAngel It was 2 github pages
14:10 pink_mist 4 of mine are github pages
14:10 pink_mist no wait, sorry
14:10 pink_mist 3
14:10 CandyAngel Firefox just hates me :)
14:29 Craftsmanship sri: If you want a more complex queue manager, wouldn't it make sense for the minion thing to simply enqueue a thing with $queue_software that manages depends nicely?
14:33 jberger Craftsmanship: if you are going to go that far, you don't need minion in the middle of it
14:36 sri yea, i don't understand that question, why use minion at all then?
14:48 jberger sri: I think that actually holds the answer, at least for me
14:48 jberger Minion is excellent for what it is, if you need more, use a more comprehensive system
14:49 CandyAngel You could still use Minion as an interface
14:49 jberger But minion would need an interface to the other system
14:50 jberger Which would be implemented in your app, since that's how minion works
14:50 jberger So what do you gain?
14:50 CandyAngel Aren't minion workers less disruptive to restart than the whole app?
14:50 CandyAngel So like
14:50 CandyAngel If you have Minion passing jobs to X, then you want to change it to Y, you just change the code and start new workers
14:51 CandyAngel And it'll naturally switch over without interrupting
14:51 CandyAngel I think?
14:51 jberger You can do the same with hypnotoad
14:51 CandyAngel Ah okies
14:51 CandyAngel Not used hypnotoad yet
14:51 jberger Zero downtime restarts
14:51 CandyAngel Coolies :)
14:52 jberger Its key feature over the prefork server
14:56 Craftsmanship jberger: that's a fair point. I suppose it's fine to leave the "queue this" action in Mojo, since it doesn't have to block. There really wouldn't be a need to enqueue your "please enqueue this" request with minion
14:58 sri there seems to be a misunderstanding here, no part of minion is actually in mojolicious
14:58 Craftsmanship "minions queue" then
14:58 sri minion just ships with a plugin
14:59 SmokeMachine sri: I remembered reading that, but I couldn't find it again! Thanks!
15:00 meshl joined #mojo
15:02 sri Craftsmanship: also, is there a specific job queue you're referring to?
15:02 sri so far job dependencies seems like mostly a python thing to me
15:04 Craftsmanship I can't imagine why a language would change the order your tasks need to be done in.
15:04 Craftsmanship there are a couple of things at $day_job that require a sync-point like the one job depends would give you.
15:04 Craftsmanship say, scaling photos before publishing a blog post, as an example.
15:12 sri it appears we are talking past each other
15:13 Craftsmanship i'm replying directly to "so far job dependencies seems like mostly a python thing to me"
15:13 Craftsmanship and wondering why someone would think that, for example, a perl program wouldn't care about its order of tasks
15:13 sri which perl job queue does that?
15:13 sri i'm talking about specific job queues
15:14 sri like celery and rq
15:14 sri how does perl as a language fit into the argument?
15:15 Craftsmanship .oO { maybe if I paste that quote again it will clarify }
15:15 meshl joined #mojo
15:15 thowe joined #mojo
15:15 mattastrophe joined #mojo
15:15 sri i said it seems to be a python thing because i've only seen the feature in python job queues
15:16 sri as in, the python community appears to have made it a thing
15:16 Craftsmanship ah - celery has it, so it's a python thing, i see.
15:16 sri multiple python job queues
15:16 sri was that sarcasm?
15:16 Craftsmanship nope.
15:17 Craftsmanship you /do/ hold the position I thought you do.
15:17 sri ah
15:19 thowe :|
15:21 Craftsmanship one of the solutions i've seen involves each task in the queue checking whether every task is done, and then adding a new queue item for the dependent task, but dragging all that state around is trouble
15:21 Craftsmanship and the race between the last 2 workers is troublesome too
15:24 meshl joined #mojo
15:25 Craftsmanship (this race / last-worker-death thing ends up with you needing to check the state of everything to re-queue the dependent job that was missed)
15:36 meshl joined #mojo
15:39 kaare_ joined #mojo
15:44 PryMar56 joined #mojo
15:52 abra joined #mojo
15:53 abra joined #mojo
16:02 sri without a really really good solution i'm fine with what we have now
16:03 sri you can limit concurrency for tasks by having a special worker for only that one task with a -j flag
16:04 sri and you can do task dependencies by enqueueing them with different priority settings and keeping those tasks on the same workers
16:04 ZoffixWork joined #mojo
16:04 Craftsmanship so you have one task type as a mutex for the depends ...
16:04 Craftsmanship sounds like a good blog post
16:04 sri of course there's the problem of the final task x which can overlap with the first task y
16:05 sri but there's other workarounds too, like having task x actually enqueue the followup task y
16:05 sri that's good enough for me
16:05 Craftsmanship yep, it's a feature that you can't do with a simple queue, so it would require someone to *gasp* write some code to do it.
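A sketch of the workaround sri settles on, with invented task names: the first task enqueues its followup itself once its own work is done, so no dependency metadata is needed:

    # Task x does its work, then enqueues task y as a followup
    $app->minion->add_task(scale_photos => sub {
      my ($job, $post_id) = @_;
      # ... scale the photos ...
      $job->minion->enqueue(publish_post => [$post_id]);
    });

    $app->minion->add_task(publish_post => sub {
      my ($job, $post_id) = @_;
      # ... publish the blog post ...
    });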
16:39 asm35 hi. sorry if it's a noob question but does Mojo provide something similar as an authentication mechanism as a plain login with a .htaccess file?
16:40 asm35 (i mean without setting up an Apache server)
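asm35's question goes unanswered below; for what it's worth, one common way to get .htaccess-style protection without Apache is HTTP basic auth in an under route, sketched here with a hard-coded user:password purely for illustration:

    use Mojolicious::Lite;
    use Mojo::Util qw(secure_compare);

    # Every route registered after this "under" requires basic auth
    under sub {
      my $c = shift;
      return 1 if secure_compare(($c->req->url->to_abs->userinfo // ''), 'user:password');
      $c->res->headers->www_authenticate('Basic realm="restricted"');
      $c->render(text => 'Authentication required!', status => 401);
      return undef;
    };

    get '/' => {text => 'Hello, authenticated world!'};

    app->start;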
16:54 asm35 left #mojo
17:00 PopeFelix What's the word for several interrelated HTTP transactions?  An  interchange?
17:10 abra_ joined #mojo
17:14 bpmedley sri: If you have time, I have some example code (mostly SQL) for the Minion dependency question: https://github.com/kraih/minion/compare/master...brianmed:master  (I have done some rudimentary testing of the _try sub).
17:17 ZoffixWork joined #mojo
17:17 panshin joined #mojo
17:18 sri bpmedley: the column needs to be bigint
17:18 sri i don't understand the purpose of the dependencies method
17:18 sri or enqueue_dependent
17:19 bpmedley I meant to mention that the dependencies method probably should be there.
17:20 sri what is the performance cost?
17:21 bpmedley sri: https://bitbucket.org/snippets/bpmedley/4RaAb <-- Here's example usage. Please note line 49.
17:21 ZoffixWork left #mojo
17:21 sri ah
17:21 sri i guess that makes sense
17:21 bpmedley I don't know the performance cost.  Sounds like there may be interest, so I'd be happy to try and do some benchmarks.
17:22 sri so the dependencies method does not serve a purpose?
17:23 bpmedley https://bitbucket.org/snippets/bpmedley/xR5AB <-- New one without private data
17:24 bpmedley sri: I was thinking of using it so a running job could find its dependencies.
17:24 sri enqueue_dependent seems more like enqueue_chain or so
17:24 bpmedley I'll update
17:24 sri bpmedley: ok, then the dependencies method is just bad
17:25 bpmedley Gaw.. Should a running job be able to get its dependencies?
17:25 sri you can already get the info from job_info
17:28 bpmedley https://github.com/kraih/minion/compare/master...brianmed:master <-- Reload pls
17:29 sri not sure what the chances of this are, the try query is getting extremely complicated there
17:30 sri i think you can shorten the column name to deps, like args
17:34 bpmedle__ joined #mojo
17:35 PopeFelix I think we need a "how do you say this" channel. ;)  Seems like half the questions I ask lately are of that nature.
17:38 bpmedle__ sri: I committed the changes
17:39 bpmedle__ Also, no current tests were broken, it seems
17:40 sri tests and benchmarks would be good
17:40 bpmedle__ Sweet, it will take time.  I'll try by end of day tomorrow.  That SQL was a pita.
17:40 sri hmm, i wonder if there are locking problems
17:41 sri with postgres 9.5 dequeue could be super fast with SKIP LOCKED
17:41 sri but this actually performs extra selects on the table
17:41 sri does that have a negative effect?
17:43 bpmedle__ Hrmm.  The subquery on dependent_counts is a temporary table; however, there's usually a better way.  I spent several hours on the SQL and wanted to show some code sooner rather than later.
17:43 sri sure
17:44 sri it does handle the case where jobs we depend on are missing, right?
17:44 sri like, already cleaned up
17:44 bpmedle__ I hadn't thought of that.
17:44 bpmedle__ Let me test.
17:46 sri i imagine a dep is resolved if the dep job is finished or missing, that's it
17:47 sri any other state should block the following job
17:47 bpmedle__ The goal with the SQL was to block a chain if there was a prior failed job or a prior active job.
17:48 bpmedle__ And, that seems to work if a prior job is missing.
17:48 sri oh, that sounds wrong
17:48 sri a prior inactive job has to block followups
17:48 sri you might be lucky here because of created timestamp ordering
17:49 sri but there's race conditions
17:49 bpmedle__ I'm ordering by id, I thought.
17:49 bpmedle__ The assumption is that the chain will be inserted in order.
17:49 sri hmm, lots of special cases :/
17:51 bpmedle__ Won't the serial column in minion_jobs enforce a chain being inserted in order?
17:52 sri oh, you changed the order
17:52 bpmedle__ Yip
17:53 sri same problem though, what if in a chain of x -> y -> z the job y has a higher priority than job x
17:53 bpmedle__ My other assumption was that there would have to be the same priority for all jobs in a chain.
17:54 bpmedle__ Sorry, I should have said that.
17:54 sri that's not true
17:54 sri you can't guarantee that
17:55 sri the system has to be defensive and cope with errors there
17:55 bpmedle__ Perhaps in enqueue_chain there could be an error when inserting jobs with different priorities?
17:56 sri i'm not actually sold on enqueue_chain
17:56 sri in my mind it's ->enqueue(foo => {depends_on => [3, 7, 8]})
17:56 sri so there's lots of room for getting it wrong
17:57 sri there shouldn't be a need to enqueue all depending jobs together
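To make the shape of that imagined API concrete (depends_on is hypothetical here; it did not exist in Minion at the time of this log): enqueue already returns a job id, and followups would simply reference earlier ids, enqueued whenever convenient:

    # Hypothetical depends_on option, sketched only to illustrate the discussion
    my $convert = $minion->enqueue(convert_photo => [$file]);
    my $resize  = $minion->enqueue(resize_photo  => [$file] => {depends_on => [$convert]});
    my $email   = $minion->enqueue(email_photo   => [$file] => {depends_on => [$resize]});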
17:57 bpmedle__ Hrmm.  Want me to rework some of this under the depends_on guidance?
17:58 sri i don't know what's right, just saying how i imagined it
17:58 bpmedle__ Want me to at least try and show you a working example with depends_on?
17:58 sri sure
17:59 sri number one rule for minion is that everything needs to be rock solid btw.
17:59 sri so, always make it as defensive as possible
17:59 sri ideally self repairing
18:01 sri the sql should be easy enough to harden though, my biggest worry is still dequeue performance
18:02 sri there have been others interested in job dependencies, maybe some of them would like to chime in?
18:22 Kogurr joined #mojo
18:27 CandyAngel Regarding performance, I would be okay if, in general, jobs without dependencies get executed first
18:27 bpmedle__ CandyAngel: Would that lead to starvation?
18:28 CandyAngel Depends on what workers you have
18:28 CandyAngel if you have task resize_image
18:28 CandyAngel And some have dependencies and some are standalone, the standalone ones would get executed first, then when it found none to do, it'd run the (slower?) check for ones with depends
18:29 CandyAngel Or workers could be launched with --mainly-depends
18:29 CandyAngel And that one would check ones with dependencies first
18:29 * CandyAngel is just throwing ideas about
18:29 bpmedle__ What if there are more standalone jobs inserted while resize_image blocks?
18:29 CandyAngel They get run first
18:30 CandyAngel So ummm
18:30 CandyAngel When it goes to dequeue the next job, it does
18:30 CandyAngel SELECT job WHERE deps IS NULL LIMIT 1
18:30 CandyAngel or whatever
18:30 CandyAngel If it gets a job from that, it returns it
18:30 bpmedle__ Is it possible that so many standalone jobs that the resize_image job misses an SLA?
18:30 CandyAngel If not, it falls through to fetching the next job that needs depends done first
18:31 CandyAngel In that case, have a worker with --mainly-depends
18:31 bpmedle__ jobs that => jobs get inserted that
18:31 bpmedle__ Sounds valid, yet complicated.
18:31 CandyAngel Inspired by --write-mostly from mdadm :P
18:32 CandyAngel Default behaviour that prioritises performance is fine if you can override it
18:32 sri what's the point of performing jobs without deps first?
18:32 CandyAngel Just to reduce performance impact
18:32 sri my performance concern is specifically about avoiding delays caused by locking in postgres
18:33 CandyAngel Ohh
18:33 sri that's the bottleneck
18:33 sri 100 workers concurrently sending the same query looking for jobs
18:33 CandyAngel Have dependency circles been discussed?
18:33 sri you have to make sure they don't block each other for too long
18:35 sri in postgres 9.5 we will be using SKIP LOCKED, which with the current sql can avoid all delays due to locks
18:35 sri dequeue would just skip locked rows
18:36 sri now bpmedley is adding a complicated subquery which could change that
18:36 CandyAngel I see
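For readers who haven't met SKIP LOCKED: a worker selects and locks one ready job, and competing workers skip rows that another transaction has already locked instead of queuing up behind them. An illustrative Mojo::Pg query of that general shape, deliberately simplified and not Minion's actual SQL:

    # Illustrative only; simplified columns, not the real Minion::Backend::Pg query
    my $job = $pg->db->query(
      "select id, task, args from minion_jobs
       where state = 'inactive'
       order by priority desc, created
       limit 1
       for update skip locked"
    )->hash;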
18:38 sri dependency circles have not been discussed, and would indeed be a problem
18:40 sri bpmedley++ # for trying (i hope you're not too sad if this doesn't work out)
18:40 bpmedle__ No, not sad at all.  I've learned quite a bit.
18:42 sri my gut feeling says there's too many problems, but maybe they can all be resolved
18:42 bpmedle__ Is the main issue priorities, you think?
18:43 bpmedle__ Or rather, the main unsolved issue.
19:00 CandyAngel Why are priorities an issue?
19:00 sri heh, was about to ask the same
19:02 sri real issues i remember are performance (working well with SKIP LOCKED to avoid slow downs at high concurrency), dependency circles, making sure only a finished or missing job resolves a dependency
19:02 CandyAngel Missing jobs?
19:03 sri and then come higher level problems, like what happens if job y in a x -> y -> z chain fails
19:03 sri a cleaned up job
19:03 CandyAngel Oh, because X could be cleaned up before Z is run?
19:04 sri yes
19:04 sri like, you could have 500 deps, which take forever, and the first few have already been cleaned up when the last finishes
19:04 CandyAngel Hm
19:04 CandyAngel Would it check that the jobs you say it depends on exist before enqueuing?
19:05 sri no
19:05 bpmedle__ It's the SQL.  Given the following tasks: convert_photo, resize_photo, crop_photo, and email_photo [5].  Each depends on the former and email_photo has priority of 5.  Then, email_photo will sort to the top before convert_photo.
19:05 sri the system is eventually consistent, integrity is not guaranteed
19:06 sri bpmedle__: that doesn't seem like a problem to me at all
19:06 sri it has deps, so it gets skipped
19:07 sri however, it can be bad for performance of course
19:08 bpmedle__ Hrmm.  Sorry for my confusion.  Given the four jobs above, what order should they be run: convert, resize, crop, and email or email, convert, resize, and crop?
19:09 sri how can email run if it has deps on the other?
19:10 sri oh, another problem someone noticed the other day with deps
19:10 bpmedle__ email should always happen last, in my opinion.  However, right now, my SQL is broken when using priorities.
19:10 sri in postgres we use notifications to wake up workers
19:11 sri but with deps there will be no notification for followup jobs
19:11 bpmedle__ Hrmm.  So that means, only notify when the last job in the chain has finished?
19:12 sri no
19:12 sri it means notify when a job gets finished
19:12 sri which can cause new problems ;p
19:12 bpmedle__ Gotcha, so that code doesn't need to change.
19:12 sri since you're waking up workers much more often, you don't know if a job has other jobs depending on it
19:13 sri i'll have to link to this in the future whenever the topic comes up again ;p
19:14 sri since i'll forget all those problems again in about 15 minutes
19:14 bpmedle__ I'll try and get priorities done with the SQL.  However, it may take some time.
19:15 sri what does that mean?
19:16 sri priorities should not be a problem at all
19:16 sri they have no meaning here
19:16 bpmedle__ I could be confused.  Let me put together a pastie.
19:18 bpmedle__ https://bitbucket.org/snippets/bpmedley/895ne  <-- Does this look like a proper Mojolicious::Lite app that uses depends_on?
19:38 bpmedle__ https://bitbucket.org/snippets/bpmedley/z95nn <-- This is the output from the current SQL.  Notice how job 6 sorted to the top with its priority of 10.  So, my current implementation is off.
19:56 mattastrophe joined #mojo
20:50 panshin joined #mojo
21:19 marty joined #mojo
21:19 Grinnz joined #mojo
21:29 asarch joined #mojo
21:29 nic I was hoping to contribute to the dependencies discussion by researching celery-tasktree, but I don't grok the python, so nope
21:29 bpmedley_ nic: Perhaps might look at the SQL I wrote?
21:30 nic k
21:30 nic but didn't you say right after that it's wrong?
21:31 bpmedley_ I think it's wrong for priorities.  If all jobs in the chain have the same priority, I think the SQL will work.
21:32 bpmedley_ So, we have a few issues.  Making the SQL work for priorities, speed of execution, and probably something else.
21:33 sri http://irclog.perlgeek.de/mojo/2015-09-26#i_11279516
21:33 sri that's where i listed issues
21:33 nic One thing I wanted to mention... my view is that if job B depends on job A, then almost by definition job B can expect a bit of latency
21:33 nic so it's reasonable to have latency between A completing (successfully) and B launching
21:33 jb360 joined #mojo
21:34 nic so if that can be done without hindering jobs that have no dependencies, win
21:34 bpmedley_ nic: That makes sense to me.
21:34 * nic reads that bit of the log
21:36 nic Unfortunately I currently have no familiarity with Minion internals (looking forward to reading them tho)
21:36 nic I saw one of the python job queues has "pluggable queue managers"
21:36 bpmedley_ nic: Feel free to ask questions.  The design by sri is very elegant.
21:37 sri so, to sum up the problems again, a) performance (we want very good concurrency with SKIP LOCKED in postgres 9.5), b) dependency circles, c) only finished and missing jobs should resolve dependencies, d) slow start (jobs with dependencies do not get notifications), e) resolving issues with failed jobs in the middle of a chain is hard
21:37 nic My ideal would be if Minion core could stay nice and lean (and so demonstrably correct) and there was somehow a way for others to try out experimental, more complicated managers
21:38 bpmedley_ nic: Just make a branch in git with a pluggable backend.
21:39 nic bpmedley_: yeah, I think that's the nub.  I was lying down thinking how to make dependencies really efficient, and (a) some aspects don't need to be efficient, and (b) none of my ideas would ever be elegant
21:39 sri python job queues tend to be a total mess, much of the code is unreadable
21:39 sri especially celery
21:40 nic ah, it's not completely my fault then [<-- could not read it]
21:40 sri and the backends are very mixed quality
21:40 sri some are even discouraged by the authors... and those ship with celery core
21:40 nic :)
21:40 bpmedley_ Gaw
21:41 mattastrophe joined #mojo
21:43 nic sri: is (d) 'slow start' referring to dependent jobs suffering extra latency?
21:44 nic (if so, I don't think that's a problem, it kind of comes with the territory)
21:44 bpmedley_ nic: The dependent jobs won't get pub/sub notifications, I believe.
21:45 nic so it's about needing an additional mechanism for launching them?
21:45 bpmedley_ nic: I don't think so; perhaps perusing the code would be helpful?
21:46 nic yeah, I need to read the source and play
21:46 marty joined #mojo
21:46 bpmedley_ nic: https://github.com/kraih/minion/blob/master/lib/Minion/Command/minion/worker.pm#L9
21:47 nic sri: In (c) 'only finished and missing jobs should resolve dependencies', I don't understand the mention of missing jobs
21:47 TheGrinnz joined #mojo
21:47 nic does it mean that if a job disappears (failure) then you know its dependents are doomed?
21:50 mattastrophe joined #mojo
21:53 sri nic: no, literally disappear, as in getting cleaned up
21:53 sri http://irclog.perlgeek.de/mojo/2015-09-26#i_11279529
21:53 nic thanks
21:55 sri i still don't understand the problem with priorities though, the logic should be just to check the list of deps, and see if they are missing or finished
21:55 sri and that's it, order is irrelevant
21:55 bpmedley_ sri: My logic is currently somewhat flawed.
21:56 meshl joined #mojo
21:56 bpmedley_ I'd be happy to show you the flaw in the logic, if you want.
21:57 sri i guess e) could be solved by including jobs that depend on this job in job_info
21:57 bpmedley_ Perhaps with a dependencies method.. :P
21:57 sri bpmedley_: nono, i'm just here to complain ;p
21:58 sri no dependencies method, it serves no purpose
21:59 sri can be done like worker_info with the list of jobs
21:59 nic yes, would need to know which jobs are problematic-by-proxy
21:59 sri my main worry is still a)
22:01 sri it needs to be 100% clear that the subquery has no effect on the performance with SKIP LOCKED
22:01 bpmedley_ There's usually a way to remove a subquery.  I just wanted to show the code quickly.
22:12 nic bpmedley_: Are you able to benchmark what you have vs the original version?
22:12 bpmedley_ nic: I think that's possible, I have not tried it.
22:17 sri really important, benchmark with concurrency
22:17 sri like 5 workers
22:18 sri and very fast jobs
22:18 sri dequeue speed is what matters
22:24 bpmedley_ Migrations are all the awesome.  Just run your app and tables are working.
22:33 dvinciguerra_ joined #mojo
22:52 sri just thought of something
22:52 sri d) is not a problem actually
22:52 sri since the worker that finished the job will also check for new jobs right away
22:53 sri there should be little delay
22:54 sri b) might be unfixable
22:54 sri we'll have to decide if that's a problem
23:00 dvinciguerra joined #mojo
23:33 preaction I made a very simple message broker using Mojo websockets: http://preaction.me:3000 is something like this useful enough to release? i can add more patterns, trying to achieve support for zeromq/nanomsg patterns
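Not preaction's broker itself, just a minimal sketch of the Mojolicious websocket primitives it builds on, as an echo endpoint in a Lite app:

    use Mojolicious::Lite;

    # Echo every websocket message back to the sender
    websocket '/echo' => sub {
      my $c = shift;
      $c->on(message => sub {
        my ($c, $msg) = @_;
        $c->send("echo: $msg");
      });
    };

    app->start;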
23:43 cpan_mojo Mojo-PDF-1.003001 by ZOFFIX https://metacpan.org/release/ZOFFIX/Mojo-PDF-1.003001
23:48 jberger joined #mojo
23:49 jberger sri: I'd think that dependency circles are resolved by "don't do that"
23:49 jberger It would almost be hard to do with "depends_on"
23:49 jberger preaction++ reading now
23:50 nic preaction: v nice
23:51 sri jberger: good point, it is hard to do with that api
23:52 jberger preaction: does that work with preforking?
23:52 jberger I still don't get it :s
23:52 jberger Also my original plan was to multiplex topics over a single websocket
23:53 jberger To reduce the number of connections each client needs
23:54 bpmedley_ preaction: Works for me.. :)
23:57 bpmedley_ preaction: I can get a message in Firefox and Chrome at the same time.  Super sweet.  I hope you release it.
23:59 preaction jberger: no, you can't prefork the broker itself. i'm going to make a Mojolicious::Command::broker that'll make it more obvious what to do to enable server-side communication
23:59 jberger Ah
