
IRC log for #pdl, 2013-11-26


All times shown according to UTC.

Time Nick Message
03:39 gtodd run4flat: so now I retested the lil bug using PDL-2.006 with 5.18.1 and instead of the core dump I see with 2.007 as here -->  http://paste.scsys.co.uk/281132   I get the error message I wanted :-)  ...e.g. http://paste.scsys.co.uk/281662
03:40 gtodd so I guess I'll add that to the PR ...
03:42 gtodd I guess a lot changed from 2.006->2.007; maybe this has something to do with the new 64-bit indexing?
03:43 gtodd I'll stick with 2.006 for now since I make the error too frequently ;-)
04:52 jberger__ joined #pdl
13:15 run4flat gtodd, excellent
13:16 run4flat gtodd, did you mean to say you'll stick with 2.007 since you make the error too frequently?
13:16 run4flat :-)
13:33 jberger__ joined #pdl
13:50 run4flat Hey, two nifty tools I recently learned about reading various Perl blogs: http://showterm.io/
13:50 run4flat http://goosecode.com/watson/
13:50 run4flat the first "plays" a recorded terminal session in a browser
13:50 run4flat to see it in action, check out http://techblog.babyl.ca/entry/something-fishy
13:51 run4flat the second is a tool for Github issue tracking and management from the command-line
13:51 run4flat here is a (somewhat confusing) demo: http://goosecode.com/watson/
13:52 gtodd run4flat:   well with 2.007 it core dumps and I lose all my precious cargo culting so .. no :-P   the error message in 2.006 is instructive  "Slice cannot start or end above limit. eval"  but a bit wordy in a perldebug way after that
13:52 gtodd cool
13:53 run4flat oh, I see
13:53 run4flat I mis-read your previous comments then
13:54 vicash run4flat: hello. what are some good perl blogs to read regularly ? i don't read any..
13:55 gtodd that watson thing is neat
13:55 gtodd vicash:  perlr.com  ?
13:55 gtodd :)
13:55 run4flat holy crap check this out: http://showterm.io/62a09cf36b656552f0cdd
13:56 run4flat I usually read blogs.perl.org
13:56 run4flat but I am also subscribed to the Perl Weekly
13:56 run4flat http://perlweekly.com/
13:57 run4flat 90% of the Perl Weekly is on blogs.perl.org
13:57 run4flat but the other 10% is sometimes useful
13:57 run4flat whoah, showterm is astounding
13:59 vicash good video tutorial there run4flat
14:00 run4flat well, it wasn't meant to be much of a tutorial
14:00 run4flat :-)
14:00 run4flat the thing is, I installed the showterm bash script, typed "showterm", and it produced that
14:00 run4flat that's it!
14:01 vicash cool.. for blogs.perl.org i cant seem to find their RSS feed
14:02 * run4flat isn't sure that blogs.perl.org has an rss feed
14:04 * vicash wonders if PDL runs on GPUs
14:05 Mithaldu run4flat: i need an rss aggregator that can filter duplicates
14:05 Mithaldu also, yes, bpo does have an rss feed
14:05 run4flat vicash, no
14:05 run4flat In principle it could be extended to do that
14:05 run4flat but it would take a lot of work to get there
14:06 Mithaldu run4flat: if you had the privilege of using a good browser you'd continually see that nice orange rss logo in the address bar :)
14:06 run4flat Mithaldu, I have never used rss, so I'm blind to such things
14:06 run4flat :-)
14:06 * vicash agrees with Mithaldu
14:07 Mithaldu run4flat: trust me, you couldn't overlook that :D
14:07 * run4flat wonders if he's using a good browser
14:08 vicash run4flat: i wonder if PDL supports GPUs in some form, such as for, say, non-slicing operations where you apply a lambda to every element of the PDL
14:08 Mithaldu actually don't trust me, just take a look: http://files.myopera.com/Pain%20of%20Salvation/files/operarssicon.png
14:08 vicash i mean to say if it can
14:08 Mithaldu that icon shows up for me on every page with one or more rss feeds
14:08 Mithaldu in firefox and chrome that stuff is all hidden away
14:08 Mithaldu for really stupid reasons
14:09 Mithaldu like, you'd probably facepalm if i explained why
14:09 run4flat Mithaldu, I have seen icons like that before
14:09 run4flat I've not seen it in my recent Firefoxen, though
14:09 Mithaldu yes, firefox had it until 2 years or so ago
14:09 run4flat vicash, PDL does not, in the present tense, support GPUs at all
14:10 Mithaldu actually, have a link: http://www.webmonkey.com/2011/01/firefox-4-ditches-the-rss-button-heres-how-to-get-it-back/
14:10 run4flat In principle, one could extend PDL's notion of vectorized operations and automatically parcel the work out into GPU threads
14:10 Mithaldu opencl?
14:10 run4flat again, in principle, it's possible
14:10 run4flat but it's not implemented
14:10 vicash yes opencl
14:11 run4flat And, some basic operations would have to be rewritten
14:11 run4flat like sequence
14:11 run4flat The problem is that PDL::PP has no notion of thread index
14:11 vicash actually sequence would not be parallelizable since it depends on the previous value
14:11 run4flat vicash, writing a parallelizable version of sequence would be trivial if you knew the thread index
14:11 run4flat but PDL::PP doesn't supply that
14:12 run4flat anyway, I've thought about these things before and I've not seen an easy hack to implement PDL methods on GPUs
14:12 vicash ok... so sequence is already parallelized on the CPU ?
14:12 run4flat there may be a way to do it
14:12 run4flat but I am not aware of it
14:12 run4flat no, sequence is not parallelized on the CPU
14:12 run4flat like you said, its implementation uses the "previous" value
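
A minimal sketch of the distinction being made here, in plain Perl rather than PDL::PP code (the variable names are made up for the example): the cumulative definition of sequence needs the previous element, while an index-based definition needs only each element's own position, which is exactly what a per-thread index would supply.

    use strict;
    use warnings;

    # Cumulative formulation: element $i depends on element $i-1,
    # so this loop cannot be split across independent threads as-is.
    my @seq_serial = (0);
    push @seq_serial, $seq_serial[-1] + 1 for 1 .. 9;

    # Index-based formulation: element $i depends only on $i itself,
    # so each element could in principle be computed by its own GPU
    # thread, if PDL::PP exposed that index.
    my @seq_indexed = map { 0 + $_ } 0 .. 9;

    print "@seq_serial\n@seq_indexed\n";   # both print 0 .. 9
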
14:13 vicash i think, more than sequence itself, something that operates on a sequence would be more parallelizable
14:13 run4flat plus, we only have pthread parallelization, which is expensive
14:13 run4flat well, sure, $a + $b could be easily parallelized
14:13 vicash any GPU usage is expensive too, as at least with OpenCL you have to schedule memory movements from CPU to GPU memory and back
14:13 run4flat right
14:13 vicash so the computations have to be > a threshold
14:13 run4flat so ideally you'd let your data live on the GPU the whole time
14:14 run4flat things like sequence would allocate data on the GPU instead of the CPU
14:14 run4flat for example
14:14 run4flat and then you'd copy the results to the CPU when you were all done
14:14 vicash that is a possibility.. but will become application dependent
14:14 run4flat yep
14:14 vicash the great thing about OpenCL is that you can install AMD's or Intel's OpenCL library and use that on CPUs for debugging
14:14 run4flat I think the better route would be to develop a GPU-based PDL look-alike
14:15 vicash but how would that work with existing code bases ?
14:15 * run4flat shakes head
14:15 * vicash withdraws earlier comment
14:15 vicash if someone wants GPU they should be willing to change code
14:16 vicash i have this open source software i wrote that uses MPI to schedule OpenCL kernels across various systems with GPUs
14:16 run4flat nice
14:16 run4flat very nice
14:18 run4flat I would have liked to have dug into that
14:18 vicash https://github.com/vikasnkumar/wisecracker
14:18 run4flat but my work has taken me elsewhere
14:18 run4flat so to speak
14:19 Mithaldu vicash: suggesting a switch from c++ to C :)
14:20 vicash the code is in C actually.. the C++ is a wrapper around it for a C++ API.. so the software has both C and C++ APIs for users
14:20 Mithaldu oooh, ok, cool :D
14:20 vicash in fact even the MPI stuff is optional as not everyone wants it
14:20 run4flat vicash, that's pretty sweet!
14:20 vicash so that is also internally managed using a wrapper that gets selected at compile time so u can have standalone OpenCL scheduling of kernels and/or across systems
14:21 vicash everything is event based including results collection
14:21 vicash unfortunately I am using OpenCL events.. and the OpenCL event wait function pins the CPU at 100% because both NVIDIA and AMD have bad implementations that use sched_yield() instead of something like epoll() and an eventfd()
14:22 vicash that sucks if your kernel is slow, but if not then it doesn't matter
14:22 run4flat wow
14:23 vicash the reason i did this is because if you have 10 machines each with a differently powered GPU then results come in a different order.. and my software will then load balance accordingly
14:25 run4flat vicash, you know quite a bit more about parallel computing and GPUs than I had realized!
14:25 run4flat my own work with GPUs has been with CUDA
14:25 run4flat I wrote CUDA::Minimal: https://github.com/run4flat/perl-CUDA-Minimal
14:25 run4flat which makes it easy to transfer data to/from the video card
14:26 vicash that's cool
14:26 run4flat but you have to write your kernels using Inline::C or plain XS code
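
A sketch of the transfer side of that workflow, assuming CUDA::Minimal's exported functions as recalled from its docs (MallocFrom, Transfer, Free); treat those names as assumptions and check the module's synopsis. The kernel itself is not shown, since, as said above, you would write it in Inline::C or XS and compile it with nvcc.

    use strict;
    use warnings;
    use CUDA::Minimal;   # function names below are recalled from the docs

    # Pack some host-side data as floats ...
    my $host_in  = pack 'f*', 1 .. 1024;
    my $host_out = $host_in;              # same length; will hold results

    # ... allocate-and-copy onto the video card ...
    my $dev_in  = MallocFrom($host_in);
    my $dev_out = MallocFrom($host_out);

    # ... a kernel written in Inline::C or XS (compiled via
    # ExtUtils::nvcc) would run here, reading $dev_in, writing $dev_out ...

    # ... then copy the results back to the CPU and free device memory.
    Transfer($dev_out => $host_out);
    Free($dev_in);
    Free($dev_out);
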
14:26 vicash that's a good idea
14:27 vicash if one is thinking of improving marketing of PDL then having it support GPUs will be a killer feature.. esp since Altera is now supporting OpenCL on their FPGA boards
14:27 run4flat unfortunately, nVidia changed something in their headers between v3 and v5 of CUDA, and now it won't compile
14:27 vicash however it takes a lot of time and effort to do so
14:27 run4flat vicash, if I were doing high-performance computing for my research, I would probably be working on this problem
14:28 run4flat :-)
14:28 run4flat but I'm working on very simple things, things that don't need much horsepower
14:29 vicash same here actually.. since i work for myself i dont have any projects that do GPU or where something like PDL can be used.. hence no work going on there
14:29 run4flat you're right, though. PDL support for GPUs would be awesome
14:29 run4flat from a marketing standpoint
14:30 gtodd run4flat:  for teaching, things like showterm are useful ... I think there once was a way one could record a shell session with standard unix tools (script, ttyrec) and convert it to an animated gif :)  ... I wonder if a "console record/playback" feature would be useful for the pdl shell
14:32 vicash run4flat: i am trying to compile your perl-CUDA-*  .. are you also getting the (?-xism:Success) mismatch ?
14:33 jberger_ joined #pdl
14:33 run4flat vicash, I haven't tried to compile it lately
14:34 run4flat but I have recently gotten some correspondence from somebody who has
14:34 jberger__ joined #pdl
14:34 run4flat To avoid any hassles, he installed an old version of Ubuntu on a thumb drive, and on that installed the older CUDA stuff
14:35 run4flat and it all worked well for him
14:35 vicash ok
14:35 run4flat btw, CUDA::Minimal depends on ExtUtils::nvcc
14:36 vicash i will try it later on an Amazon GPU system to see if it works.. yea i am getting an ExtUtils::nvcc error
14:36 run4flat hmm... is the error with ExtUtils::nvcc?
14:36 vicash what i am curious about is how are you getting the Build test command to compile C code !! that is mind boggling to me
14:36 run4flat heh
14:36 run4flat yeah, ExtUtils::nvcc was something of a hack
14:36 run4flat but a brilliant one, if I may say so
14:37 run4flat that makes things like that work
14:37 run4flat :-)
14:37 vicash yea i will have to go through it
14:37 * vicash wonders if anyone has used PDL for Graph computations
14:38 run4flat not to my knowledge
14:39 vicash was just wondering, since a graph can be represented as an adjacency matrix, how fast it would be to walk a graph
14:39 run4flat yes... but it is usually *much* better to represent it as a sparse matrix
14:39 run4flat which PDL doesn't handle out-of-the-box
14:39 run4flat there is a module for sparse PDL matrices, though
14:40 vicash ok.. what is the module name ?
14:40 run4flat but I'm pretty sure nobody has built a Graph module on it
14:40 * run4flat checks
14:40 run4flat p3rl.org/PDL::CCS
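
For what the adjacency-matrix idea looks like in plain, dense PDL (a toy example; anything large or sparse is where PDL::CCS would come in, and its API differs from what is shown here):

    use strict;
    use warnings;
    use PDL;

    # A 4-node cycle 0-1-2-3-0 as a dense adjacency matrix.
    my $adj = pdl([0,1,0,1],
                  [1,0,1,0],
                  [0,1,0,1],
                  [1,0,1,0]);

    # Walking the graph with linear algebra: entry (i,j) of $adj x $adj
    # counts the walks of length 2 from node i to node j.
    my $walks2 = $adj x $adj;
    print $walks2;
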
14:41 vicash thanks
14:42 run4flat n/p
14:44 run4flat vicash, is this your site? http://selectiveintellect.com/
14:45 vicash yes
14:45 vicash is something wrong ?
14:46 run4flat no, I'm just really impressed
14:46 run4flat :-)
14:46 run4flat you've won DARPA grants??
14:46 jberger_ joined #pdl
14:47 * vicash feels embarrassed..
14:47 vicash yes
14:47 run4flat embarrassed? that's astounding! well done!
14:47 run4flat for what it's worth, I've never actually written a grant, let alone won one
14:48 vicash thanks. although it was through a special program called Cyber Fast Track they came up with in 2011 and closed it in April of this year
14:48 run4flat ah, so I missed my opportunity. :-)
14:48 * vicash got sick of working in finance and found a way out by watching darpa website
14:49 vicash no.. no no.. i was very frustrated a year ago thinking that so many contracts/requests are being put out in so many areas of interest and i am missing out.. but that is not the case
14:49 vicash the DoD comes up with this thing called SBIR/STTR every 3 months
14:50 vicash they put forth various technical problems that are difficult to solve and have not been solved and are open to small businesses only
14:50 vicash and some are open to university folks as well
14:50 vicash so there is an opportunity every 3 months to find something of your liking and try
14:50 Mithaldu vicash: some of your blue text is links, some isn't
14:50 Mithaldu you should fix that ;)
14:51 vicash then there are the big grants where u attend their one day talks and find partners
14:51 vicash Mithaldu: noted.
14:52 vicash Mithaldu: i just used a default template from Bootstrap.js so did not pay attention much
14:52 Mithaldu yeah, that explains it
14:52 Mithaldu and it's really just the "solutions" at the top, i think
14:52 vicash yea that was supposed to match the logo :)
14:53 vicash but i can change that to a link so it links to the Services page.. thanks for that idea :)
14:53 Mithaldu cheers :D
14:53 run4flat vicash, good to know
14:53 vicash yea.. dodsbir.net has lots of info.. it's a badly designed site but it has info
14:53 run4flat I like high performance stuff, I just don't have any real reasons to use it much
14:54 * run4flat checks
14:54 vicash well most of the time it is a solution looking for a problem
14:54 run4flat heh, right
14:54 vicash run4flat: they also list a lot of physics problems.. btw last week they had released a new set of proposal requests..
14:55 vicash so u have 1 month to look and ask questions, after which the proposal submission period begins and then ends in January
14:55 run4flat hmm
14:55 vicash this is a good chance for u to try to write something and see what happens .. they will give u feedback on it and u can then figure out how to raise money on your own.. since u r a Visiting Prof i am assuming u want to get tenured somewhere :) so fund raising is your big problem
14:56 vicash the next run will be in Jan-Feb when these STTR grants come out.. STTR unlike SBIR are forcing the small businesses to work with universities
14:57 run4flat yes, I need to be able to write grants
14:57 run4flat small liberal arts colleges don't place a huge emphasis on grants
14:57 run4flat but they are good nonetheless
14:57 run4flat I have concentrated on small liberal arts schools because I haven't proven to myself that I can get grant funding
14:58 run4flat but if I can, that opens up some new possibilities...
14:58 vicash well there is always a first time
14:59 vicash and what better way to start off than with the DoD .. they have unlimited amounts of funding and judging by the political climate in this country it will not go away
14:59 run4flat huh, the Army alone has 80 solicitations
15:00 run4flat heh, nope, not going to defund our armed forces
15:01 * vicash thinks that Army and Navy have more pure science related work than the others
15:01 run4flat yeah, it looked like it
15:01 run4flat Army had some basic material science stuff
15:02 run4flat wow, air force has 253!
15:02 Mithaldu what's a solicitation?
15:03 vicash a solicitation is basically a call-for-papers type of thing
15:03 run4flat it's this: "Hey, we're going to give out money. Please apply with your proposal."
15:03 Mithaldu oic
15:03 vicash more like call for solution
15:03 Mithaldu right
15:03 vicash so multiple folks compete
15:03 Mithaldu a bounty
15:04 vicash a bounty in 3 phases
15:04 vicash Phase 1 given to say 10 teams depending on quality of solution, Phase 2 given to say 3 and Phase 3 given to 1
15:04 vicash so it is competition based throughout... bribing not allowed
15:04 Mithaldu oh, that's not stupid
15:05 run4flat hey, Air Force (http://dodsbir.net/solicitation/sbir141/af141.htm) solicitation 025
15:05 vicash it is expensive though.. most companies make no money in Phase 1 (barely break even or even take losses..) as they pay only $100K for that.. then Phase 2 gets you about $2 million.. then Phase 3 can get you > $10 million
15:05 run4flat I bet I could work on that
15:06 vicash the total time of performance is 4 years with Phase 1 being < 1 year, Phase 2 >= 1 year and Phase 3 >= 2
15:07 Mithaldu honestly, that sounds pretty fair
15:08 * vicash thinks fairness is for the science folks, the business folks get to make deals and get money without fairness
15:10 vicash run4flat: please do not forget to contact the TPOC mentioned in the solicitation asking questions as that way they know who is really interested or not. A friend of mine who works for a company which has these SBIR grants as their business model gave this advice to me. However, I have never applied for an SBIR grant
15:11 run4flat vicash, thanks
16:29 gtodd run4cash:  a bit of clarification for my bug report ....
16:29 gtodd oops
16:29 gtodd run4flat: ! :)
16:30 gtodd or anyone .... if I run perldl -V  I get "perlDL shell v1.357"   for both 2.007 and 2.006  ... is this expected?
17:24 vicash yes seems like it. i get 1.354 for PDL version 2.4.11
17:37 run4flat gtodd, yes
17:37 run4flat the *shell* hasn't changed much
17:37 run4flat PDL itself has gone up a version number
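
In other words, perldl -V reports the shell's own version; to see which PDL module actually got loaded, asking the module directly should work, for example:

    # One-liner from the command line:
    perl -MPDL -e 'print "$PDL::VERSION\n"'

Typing print $PDL::VERSION at the perldl prompt gives the same answer.
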
17:38 gtodd k thanks
17:52 gtodd wait how does one match PDL's version versus the module/dist version again?  2.4.11 == 2.007
17:58 gtodd and 2.006 == 2.4.9 ?
18:10 run4flat gtodd, 2.4.X < 2.006
18:15 gtodd hmmm
18:24 gtodd ok I notice when I build the OS-distributed port version of PDL for freebsd it installs 2.4.11, so folks using FreeBSD's ports system won't have seen the bug I'm noticing in 2.007
18:24 gtodd ok good :)
21:05 run4flat hey everyone, I'm off for the holiday!
21:05 run4flat happy Thanksgiving for all those in the US!
21:05 run4flat o/
21:28 jberger_ Happy thanksgiving run4flat
21:30 vicash happy thanksgiving to all
21:30 vicash left #pdl
22:39 jberger_ I should have said "to all" :-)
