The web in a box - a next generation web framework for the Perl programming language

IRC log for #mojo, 2017-04-02


All times shown according to UTC.

Time Nick Message
00:03 dave joined #mojo
00:12 stryx` joined #mojo
00:40 disputin joined #mojo
01:23 disputin joined #mojo
01:56 lluad joined #mojo
02:46 tchaves joined #mojo
04:04 dboehmer_ joined #mojo
05:46 asarch joined #mojo
07:45 itaipu joined #mojo
09:03 dotan_convos joined #mojo
09:08 sh14 joined #mojo
09:16 coolo sri: I wanted to run a minion job in the foreground to debug a crash - I managed to do this very manually. A "minion worker --run 27" would have made it easier to get there
09:44 dod joined #mojo
09:49 dod joined #mojo
10:18 noganex joined #mojo
10:38 stryx` joined #mojo
10:50 sri coolo: unless you made custom sql, you can't really do it manually without causing race conditions if there's another worker still running
10:50 sri we basically need an option for dequeue that makes it only dequeue that one job
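The "very manual" approach coolo describes could look roughly like the sketch below: fetch the job and call its task code directly in the current process, without going through dequeue at all. This is only illustrative; the job id 27 comes from coolo's example, the connection string is a placeholder, and skipping dequeue is exactly what opens the race condition sri mentions above.

    use Mojolicious::Lite;
    plugin Minion => {Pg => 'postgresql://minion@/minion_jobs'};

    # Look the job up and run its task coderef in this process for debugging.
    # Note: this bypasses dequeue entirely, so another worker could still grab
    # and run the same job concurrently.
    my $minion = app->minion;
    my $job    = $minion->job(27)              or die 'No such job';
    my $task   = $minion->tasks->{$job->task}  or die 'Unknown task';
    $task->($job, @{$job->args});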
11:01 sri marcus: the pluggable backends for morbo idea is dead?
11:07 * sri closes https://github.com/kraih/minion/issues/26
11:08 sri that only really leaves unique jobs and an admin ui as missing features in Minion
11:54 whidgle joined #mojo
12:37 marcus sri: What is actually missing on my pull request?
12:39 marcus sri: I know jhthorsen wants it to be a more generic component, but I'm not sure if that's what the rest of the devs want. I could do the work, but if there's no interest in the feature in general, I don't see the point.
12:57 sri marcus: didn't we find many problems with it?
13:02 sri see, this is exactly why i want to close pull requests fast
13:03 sri marcus appears to have pushed updates for our complaints, but didn't comment, and now the changes have been ignored there for two weeks
13:03 sri the pull request is basically dead to the community
13:04 sri well, since i'm trying to prove a point i'll just let it play out on its own now
13:06 trone joined #mojo
13:09 sri marcus: i do not want something more generic
13:10 sri as far as i'm concerned that could have been applied two weeks ago
13:10 sri but it's not up to me anymore
13:10 marcus sri: Good. Btw, I think what went wrong was that we were using the backchannel for communications rather than the review system on the pull request
13:13 marcus sri: I'm not sure how voting works when the idea was proposed by you and implemented by me.
13:14 pink_mist I'd expect your own vote to be +1 unless you say otherwise, but sri should need to vote explicitly
13:14 marcus in general now with 4 active core members, you just need one other core to break a tie? Anyways, jberger, batman, how do you feel about merging #1073 as it stands?
13:55 asarch joined #mojo
15:09 jberger heh, ok I'm pulling out the laptop
15:09 jberger always dangerous while I'm watching the sunday news shows?
15:21 jberger marcus: can you explain how your Inotify backend works?
15:21 jberger does it block until files are changed?
15:21 jberger I notice it doesn't use the timeout that your documentation specifies that it should
15:53 jberger sri: re expiring locks
15:54 jberger could it just be two new columns on the job, say lock_name and lock_expires
15:54 jberger and then in the _try query it simply adds something like:
15:56 jberger where not exists (select 1 from jobs locked where locked.lock_name=j.lock_name and lock.lock_expires < current_timestamp)
15:57 jberger where j is the job that is being evaluated
15:57 jberger (already in the statement)
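A hedged sketch of what jberger is proposing, with the alias and comparison written out in full. The column names lock_name and lock_expires come from the chat, the minion_jobs table name matches Minion's Pg backend, $pg is assumed to be a Mojo::Pg object, and the j alias is assumed to already exist in the dequeue query as stated above.

    # Two new columns on the jobs table
    $pg->db->query(
      'ALTER TABLE minion_jobs ADD COLUMN lock_name text,
       ADD COLUMN lock_expires timestamp with time zone');

    # Extra condition for the _try/dequeue query: skip a job while another
    # active job holds the same lock name and that lock has not expired yet
    my $extra = <<'SQL';
      AND NOT EXISTS (
        SELECT 1 FROM minion_jobs locked
        WHERE locked.lock_name = j.lock_name
          AND locked.state = 'active'
          AND locked.lock_expires > now()
      )
    SQL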
16:17 PryMar56 joined #mojo
16:19 sri marcus: always normal votes
16:19 sri jberge: that does not really make sense
16:21 sri *+r
16:21 sri jberger: how would that apply to dependencies?
16:22 sri at work i had a group of jobs with 3 stages foo -> bar -> baz
16:22 sri foo created the lock, and baz released it
16:22 sri and bar was actually multiple bar jobs running parallel
16:23 sri that's a pretty common pattern
16:24 sri or another use case, what if i wanted to lock access to a shared resource?
16:25 sri like, say a very fragile backend service
16:25 sri or, better yet, two fragile backend services from the same job
16:25 jberger ok so you want the lock to be independent
16:25 jberger this isn't just uniquing
16:25 sri it's a new primitive that allows for unique jobs
16:26 jberger my pitboss system has that of course, but its all stored procs
16:26 jberger I wonder if my attempted minion backend ever had that?
16:26 jberger I don't recall if I had started on locking before moving to stored procs
16:28 sri unique at dequeue time is a very uncommon pattern btw.
16:28 sri i have not seen any commonly used job queue doing that
16:29 sri jberger: what's your stored procedure for expiring locks?
16:30 sri even better than expiring locks would be expiring buckets :)
16:30 jberger I actually don't expire them
16:30 jberger we made the decision that if a server failed to unlock, we didn't want anything to proceed
16:30 sri like for allowing 5 jobs at a time to access that fragile backend service
16:30 jberger and require human intervention
16:31 jberger but of course that shows why we wrote our own, it has purpose-built features
16:32 jberger sri: here is the PgExtra backend
16:32 jberger https://gist.github.com/jberger/3cb978b1a376b87f5f3bff827c13f96d
16:32 jberger it does have the dequeue as a stored proc, but I don't think the lock features depend on it being so
16:34 sri ok, that makes it much less interesting for minion
16:34 jberger hmmm, the more I read that, that logic did change between that point and my current Pitboss job queue too
16:34 jberger because I now support shared and exclusive locks
16:36 jberger and yeah, the Pitboss version is totally not possible without stored procs
16:36 sri guess i could make locking really simple
16:37 sri even supporting shared locks with a limit
16:37 sri table like (name, num, expires)
16:38 sri name being arbitrary, num the current number of shared participants, and expires the latest value a consumer supplied
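A rough sketch of that table as a Mojo::Pg migration. The columns (name, num, expires) are the ones sri just listed; the minion_locks table name, the index, and the $pg object are assumptions.

    $pg->migrations->from_string(<<'EOF')->migrate;
    -- 1 up
    CREATE TABLE minion_locks (
      name    text NOT NULL,                      -- arbitrary lock name
      num     int  NOT NULL DEFAULT 1,            -- current shared participants
      expires timestamp with time zone NOT NULL   -- latest value a consumer supplied
    );
    CREATE INDEX ON minion_locks (name, expires);
    -- 1 down
    DROP TABLE minion_locks;
    EOF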
16:38 jberger "CONTINUE WHEN NOT pitboss_acquire_locks(batch.id, batch.exclusive, batch.shared);"
16:38 sri jberger: sorry, i lost interest, no need to paste more
16:38 jberger yeah, I just showed why it isn't possible
16:38 someguy joined #mojo
16:38 jberger I'll stop :-P
16:39 sri you've clearly taken a very different route, unrelated to the minion problems
16:39 someguy does it make sense to ->under( '/' => sub { ... } ) more than once, for the sake of different authentication states?
16:40 sri my $got_a_lock = $job->try_to_lock('foo_lock', 5, '10 minutes');
16:40 sri $job->unlock('foo_lock')
16:41 sri only the 5 is new
16:41 sri it would set the limit for the current consumer
16:42 sri definitely requires a stored procedure though
16:42 sri but covers *a lot* of use cases
16:43 sri basically you could do "sleep 1 until $job->try_to_lock('fragile_backend_service', '20 minutes', 5);"
16:44 sri that would make your job wait for the next lock to be available in a pool of 5
16:44 jberger doesn't that hold a worker process hostage though?
16:44 sri a job process
16:45 sri workers are heavily concurrent anyway
16:45 sri you could just as well finish the job and retry with a delay if it matters to you
16:45 jberger I suppose
16:46 jberger the more I think about this, this was part of the decision to go to stored procs, which again, wouldn't be minion's niche
16:49 sri actually "return $job->fail('Fragile backend service overworked') and $job->retry({delay => 10}) unless $job->try_to_lock('fragile_service', '10m', 5);"
16:50 jberger a reason someone might not want to do that would be that it would emit a fail event
16:50 sri that's silly
16:50 sri the job failed
16:52 sri the real reason is that it might retry automatically
16:52 sri i guess allowing retry from active state makes more sense
16:52 jberger in my mind, requeuing isn't a fail, but minion doesn't have a requeue, so that is by definition a fail
16:53 sri now that we have job versioning it's safe anyway
16:53 sri yea, i'd just allow retry of active jobs
16:53 sri return $job->retry({delay => 10}) unless $job->try_to_lock('fragile_service', '10m', 5);
16:53 jberger +1
16:53 purl 1
16:53 sri that's how the pattern would work
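Put together, the pattern being discussed would look roughly like this inside a task. try_to_lock and unlock are the hypothetical primitives from this conversation (the name gets bikeshedded right below), and the fragile_service lock with a limit of 5 comes from sri's examples.

    app->minion->add_task(call_fragile_service => sub {
      my ($job, @args) = @_;

      # No free slot in the pool of 5? Reschedule ourselves and try again later.
      return $job->retry({delay => 10})
        unless $job->try_to_lock('fragile_service', '10m', 5);

      ...;    # talk to the fragile backend service

      $job->unlock('fragile_service');
      $job->finish;
    });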
16:54 * jberger pats purl
16:54 * purl bites!
16:54 sri try_to_lock needs a fancy name though
16:54 jberger I call mine acquire_lock
16:55 sri maybe_lock
16:55 sri now i really want that primitive, it's so versatile
16:56 jberger yeah
16:56 jberger I think that fits well
16:56 jberger for the minion model at least
17:07 sri part one https://github.com/kraih/minion/commit/59748c62a63304bce5b63dcf04b0393edba2bb5c
17:08 sri it's a nice pattern in general "return $job->retry({delay => 30});"
17:09 sri for any job to reschedule itself to a later time if something gets in the way
17:20 jberger nice
17:21 rshadow joined #mojo
17:23 lluad joined #mojo
17:27 plicease joined #mojo
17:28 kamyl joined #mojo
17:28 genio joined #mojo
17:31 someguy joined #mojo
17:31 stryx` joined #mojo
17:31 perlpilot joined #mojo
17:31 bobkare joined #mojo
17:31 ksmadsen joined #mojo
17:31 VVelox joined #mojo
17:31 mtj joined #mojo
17:31 dso joined #mojo
17:31 mtths joined #mojo
17:31 mishanti1 joined #mojo
17:31 chandwki joined #mojo
17:31 kaare joined #mojo
17:31 pink_mist joined #mojo
17:31 wardenm joined #mojo
17:31 geheimnis` joined #mojo
17:31 mbudde joined #mojo
17:31 ranguard joined #mojo
17:31 cstamas joined #mojo
17:31 tianon joined #mojo
17:31 iamb joined #mojo
17:31 litwol joined #mojo
17:31 Sebbe joined #mojo
17:31 hahainternet joined #mojo
17:31 matt_ joined #mojo
17:31 Jonis joined #mojo
17:31 haarg joined #mojo
17:31 a6502 joined #mojo
17:31 px80 joined #mojo
17:31 Gedge joined #mojo
17:31 nic joined #mojo
17:31 oalders joined #mojo
17:31 nicomen joined #mojo
17:31 HtbaaPi joined #mojo
17:31 abracadaniel joined #mojo
17:40 rshadow joined #mojo
17:47 marcus jberger: I'm not really providing an Inotify backend, I only made one to test my branch. And it was made before sri and jhthorsen asked me to move the sleep to the backend, so it didn't block at all, but checked once every minute for new files.
18:03 sri hmm, i guess locks are really minion scoped, not job scoped
18:04 sri $job->minion->unlock('foo')
18:08 sri hahaha, i can already see someone pull minion into a project just as a distributed lock service
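Something like this is probably what sri has in mind, with the hypothetical lock/unlock pair living on the minion object rather than the job; the lock name, duration, and connection string are only illustrative.

    use Mojolicious::Lite;
    plugin Minion => {Pg => 'postgresql://minion@/app'};

    # Using Minion purely as a distributed lock service, no jobs involved
    my $minion = app->minion;
    if ($minion->lock('nightly_report', '30 minutes')) {
      ...;    # only one process in the cluster gets past this point
      $minion->unlock('nightly_report');
    }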
19:04 jberger marcus I realize you aren't intending to release an Inotify backend, but I kinda want to see a working push-like service implemented
19:04 jberger before I'd be onboard
19:04 jberger I'm not saying I'd need you to release it or even document/test it
19:11 Janos joined #mojo
19:22 sri argh
19:22 sri guess i've found a flaw in the design
19:23 sri just bumping the expires date could end badly for a busy queue
19:24 sri if there's a lock with 10 shares, and one job forgets to unlock, that extra lock remains forever
19:24 coolo why is assetpack being initialized for every minion job?
19:24 sri since it never expires if the queue is busy enough
19:27 sri bummer
19:28 sri well, here's my experiment, if someone feels like improving it https://github.com/kraih/minion/compare/expiring_locks
19:32 sri hmm
19:33 sri i guess instead of a column to count lock holders there could just be multiple rows for the same lock name
19:33 sri yea, much better
19:33 sri but i'm tired ;p
19:34 sri or does it actually work
19:34 rshadow joined #mojo
19:35 sri an ->unlock call would delete the row with the oldest expires first
19:36 sri so, i guess there's still a high chance for wasted resources
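The multiple-rows variant would mean one row per holder of the same lock name, with ->unlock dropping the row with the oldest expiration. A sketch against the hypothetical minion_locks table from earlier (the num counter becomes unnecessary here); $db is assumed to be a Mojo::Pg::Database handle, and ctid is just one way to delete a single row in Postgres.

    # Acquire: one row per shared holder of the same name
    $db->query(
      'INSERT INTO minion_locks (name, expires)
       VALUES (?, now() + ?::interval)', 'fragile_service', '10 minutes');

    # Release: delete the row with the oldest expiration for that name
    $db->query(<<'SQL', 'fragile_service');
      DELETE FROM minion_locks
      WHERE ctid = (
        SELECT ctid FROM minion_locks
        WHERE name = ? ORDER BY expires ASC LIMIT 1
      )
    SQL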
19:43 sri only exclusive locks are easy
19:50 sri hmmm
19:50 sri i guess you can build shared locks on top of exclusive locks
19:52 sri (using as many unique lock names as you need)
19:52 sri downside is of course that you have to remember that name and pass it along to the unlocking job
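A sketch of that workaround: try N exclusive lock names, remember which one was won, and hand that name to whichever job is responsible for unlocking. try_to_lock and unlock are still the hypothetical primitives discussed above, and release_fragile_service is an invented task name.

    # Grab one of 5 exclusive locks that together emulate a shared lock
    my $held;
    for my $n (1 .. 5) {
      my $name = "fragile_service_$n";
      if ($job->try_to_lock($name, '10m')) { $held = $name; last }
    }
    return $job->retry({delay => 10}) unless defined $held;

    # The winning name has to travel along so a later job can release it
    $job->minion->enqueue(release_fragile_service => [$held]);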
19:57 rshadow joined #mojo
20:04 sivoais joined #mojo
20:07 Janos_ joined #mojo
20:09 marcus jberger: Cool, I'll update it to the current pr.
20:25 someguy is there a $c->url_for(...) that takes route names?
20:27 pink_mist what's wrong with $c->url_for?
20:28 someguy route names seem more trustworthy than path fragments
20:28 pink_mist so what's wrong with $c->url_for?
20:28 someguy do you mean to say that it does take them, or ?
20:29 pink_mist of course
20:29 someguy is that the form with the #?
20:30 pink_mist I ... don't think so?
20:31 someguy 'cause I found myself in a circular redirect when I said ->url_for('login')
20:31 someguy and i'm sure I have a route called that
20:31 someguy and when pairing that with the inspector telling me about redirects from '/login' to '/login', the math suggests that something is either being held wrong
20:31 someguy or was crafted with the pointy parts on all sides.
20:32 someguy https://metacpan.org/pod/Mojolicious::Controller#url_for
20:32 someguy it lists many forms, I don't see one that's explicitly "name of route" =>
20:34 pink_mist I'm pretty sure 'test' is the named route
20:35 someguy so, no leading slash?
20:35 pink_mist a leading slash would make it a path
20:35 pink_mist so obviously not a route name
20:37 someguy $c->redirect_to( 'login' ), then?
20:38 pink_mist yes
20:38 someguy I have a Location: with the right path, but repeated re-requests for /login :S
20:45 someguy maybe i'm routing it wrong :S
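For the record, both url_for and redirect_to accept a route name (no leading slash), and a loop like the one someguy is seeing usually means the login route itself ended up behind the auth guard, so the guard keeps bouncing /login back to /login. A minimal sketch of the working shape, assuming a Mojolicious::Lite app; the session key and route paths are placeholders.

    use Mojolicious::Lite;

    # Defined before the guard, so it stays reachable without authentication;
    # the trailing string is the route name used by url_for/redirect_to
    get '/login' => sub { shift->render(text => 'login form') } => 'login';

    # Everything declared after this under() is behind the auth check
    under sub {
      my $c = shift;
      return 1 if $c->session('user');
      $c->redirect_to('login');    # route name, not a path
      return undef;
    };

    get '/secret' => sub { shift->render(text => 'members only') };

    app->start;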
21:02 * sri closes https://github.com/kraih/minion/issues/24
21:02 sri exclusive locks are kinda meh
21:03 sri don't see anything else we can do for now
21:03 sri better cron support is still the goal, but we need new ideas for that
21:11 * sri also closes https://github.com/kraih/minion/issues/44
21:20 pink_mist https://twitter.com/NickVasilyevv/status/846890520513273858 https://twitter.com/sarah_edo/status/847237039351152641
21:30 disputin joined #mojo
21:58 * sri votes -1 on https://github.com/kraih/minion/pull/49
21:58 lluad joined #mojo
22:19 jberger I really don't have strong opinions on any of that
22:19 jberger I'm willing to be sold on it, but I'd need active selling I think
22:30 ferreira joined #mojo
22:34 ferreira sri: jberger: I think being able to customize the Minion worker and job classes works the same way as the database and results classes in Mojo::Pg & Mojo::mysql - it should be much more useful and common to extend jobs (or results), but having that freedom for the worker (or database) as well keeps the framework from getting in the way of useful / non-anticipated extensions to these
22:35 castaway joined #mojo
22:37 genio sri: Would this look reasonable for the length problem (assuming I wrote proper tests)?
22:39 genio nevermind.
22:39 purl Well piss off then, genio
22:39 * genio hugs purl
22:39 * purl smiles
22:41 genio should have run the test suite before asking.  breaks several things
22:41 kiwiroy joined #mojo
22:43 genio Although I guess these _should_ fail https://github.com/kraih/mojo/blob/master/t/mojolicious/lite_app.t#L488-L495
22:46 sri no
22:51 genio oh, HEAD
22:51 * genio is being especially dumb today
23:12 genio sri: Any better? https://gist.github.com/genio/dfe09035d5bb7ae6b4b2aa6355c5826f  All current tests pass, but I feel like I'm still being dumb about something
23:19 sri genio: better than what?
23:19 sri anyway, i won't get involved until there's a good explanation for the problem
23:20 sri any problem like that should already be covered by premature connection close handling i would have assumed
23:20 sri the need for an actual new special case instead of a small fix somewhere would really surprise me
23:21 sri that said
23:21 genio The test case in the issue shows that it's not being caught by that though.
23:21 sri "length $res->body" is a total no go
23:22 sri loads the whole content into memory
23:23 genio what would be a better way to determine the size of the content already read?
23:24 genio I apologize if I'm asking really stupid questions. I'm trying to be helpy and learn at the same time. but if that's not the case, please feel free to tell me to shut up and leave it alone
23:24 sri the efficient way is not exposed in the public api yet i think
23:25 sri somewhere in the content parsers
23:25 sri ah yea, $res->body is also wrong
23:25 sri not just a vulnerability but also incorrect in case of compression and so on
23:28 genio Then I think that's my clue that a fix is beyond my ability to attempt at the moment
23:28 genio However, I'm confused still if you think it's a non-issue. Is the simple test case not displaying the error I'm attempting to describe?
23:32 genio If not, please don't hesitate to close out the issue as my feelings certainly won't be hurt
23:37 sri genio: i predict the fix will happen here https://github.com/kraih/mojo/blob/master/lib/Mojo/UserAgent.pm#L231
23:38 sri if ($close && ((!$res->code && !$res->error) || $res->content->is_incomplete)) {
23:38 sri something like that
23:38 purl i guess something like that is totally possible
23:39 sri with a new method in Mojo::Content that checks $self->{real_size} or what it's called
23:40 sri guess not exactly like that, since the HEAD check wouldn't work, but something like that
23:40 sri i'm sure there's an elegant way
23:41 stryx` joined #mojo
23:45 irqq_ joined #mojo
23:49 genio looks like the other is_ methods in Mojo::Content are positive, so ->is_complete seems more consistent. but I don't see a clean way to do the HEAD method check in that
23:51 genio my $check_length = uc $old->req->method ne 'HEAD';  if ($close && ((!$res->code && !$res->error) || ($check_length && !$res->content->is_complete)) { ... }
23:59 genio sub is_complete { ($_[0]->{real_size} // 0) == ($_[0]->headers->content_length // 0) }  # or some such?
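Putting sri's and genio's fragments together, the direction sketched in this conversation looks roughly like the following. is_complete and the {real_size} field are the names guessed at above, not an existing Mojo::Content API, and the Mojo::UserAgent condition is only a paraphrase of the line sri linked.

    # Hypothetical predicate on Mojo::Content: did we receive at least as many
    # body bytes as the Content-Length header promised?
    sub is_complete {
      my $self     = shift;
      my $expected = $self->headers->content_length;
      return 1 unless defined $expected && length $expected;   # nothing promised
      return ($self->{real_size} // 0) >= $expected;
    }

    # In Mojo::UserAgent's connection-close handling, also treat a short body
    # as a premature close, except for HEAD requests which never carry a body
    my $head = uc $old->req->method eq 'HEAD';
    if ($close
      && ((!$res->code && !$res->error) || (!$head && !$res->content->is_complete)))
    {
      ...;    # handle as premature connection close
    }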
