IRC log for #bioperl, 2010-07-15

All times shown according to UTC.

Time Nick Message
01:43 brandi joined #bioperl
01:43 brandi left #bioperl
03:45 dnewkirk joined #bioperl
05:22 bag joined #bioperl
09:14 deafferret joined #bioperl
09:18 dnewkirk joined #bioperl
09:18 kblin joined #bioperl
13:05 brandi joined #bioperl
13:11 brandi left #bioperl
14:34 wizi joined #bioperl
14:34 wizi Hi everyone, I need help with writing a long text file using Perl
14:34 wizi let's say I have to create and write a long text file (about 50 lines), but I don't want to write print TEST line1, "\n" on every line. Is there a faster way to write this file?
14:35 wizi thank you
14:35 kai wizi: look at perl HERE-documents
14:36 kai you can write those to a file as well
14:36 wizi where can I find it? Google?
14:36 kai or in a perl book :)
14:37 wizi thanks
14:38 kai basically it's like "print TEST<<EOF;" then write the text you want to have in the file, and then end that with EOF on its own line
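
A minimal sketch of what kai describes, using a lexical filehandle and a hypothetical output file name instead of the bareword TEST from the example:

#!/usr/bin/perl
use strict;
use warnings;

# Open the output file; "output.txt" is a placeholder name.
open my $test_fh, '>', 'output.txt' or die "Cannot open output.txt: $!";

# A here-document prints everything up to the terminator line (EOF)
# verbatim, so a 50-line block of text needs only one print statement.
print $test_fh <<'EOF';
line 1 of the long text file
line 2 of the long text file
line 3 of the long text file
(... as many more lines as needed ...)
EOF

close $test_fh or die "Cannot close output.txt: $!";

As kai notes, the terminator (EOF here) has to sit on its own line, starting in the first column.
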
14:51 wizi Done. Thank you kai
15:02 kai no problem
15:29 jhamilton joined #bioperl
16:39 dnewkirk left #bioperl
16:50 * deafferret still can't find SPECIFIC info about 1000 genomes via AWS
16:50 * deafferret spits
16:51 rbuels AWS?
16:51 deafferret rbuels-bot: AWS is Amazon Web Services
16:52 deafferret botsnack
16:52 deafferret so you don't have to download 57TB
16:52 deafferret or whatever the total is
16:54 deafferret interesting. didn't know faceface was genetically predisposed to bedwetting until now
16:54 deafferret oh... he's not here.  bummer
16:56 * deafferret wonders if AWS access counts as billable bandwidth
16:57 rbuels ok, deafferret
17:00 deafferret botsnack
17:00 deafferret botsnack
17:00 deafferret botsnack
17:00 deafferret botsnack
17:04 rbuels thanks, deafferret
17:06 deafferret x 4
19:18 pyrimidine joined #bioperl
19:19 pyrimidine for anyone interested (except deafferret, who is allergic to mailing lists)
19:19 pyrimidine http://lists.open-bio.org/mailman/listinfo/bioperl-dev
19:21 rbuels o noes, another mailing list
19:22 pyrimidine yep
19:22 rbuels it's a good idea, actually
19:22 pyrimidine low traffic (hopefully not no-traffic :)
19:22 rbuels hopefully.
19:24 rbuels god I am so sick of the tomato genome
19:24 rbuels that's pretty much all I've been doing for the past .... really long time
19:25 rbuels working on one thing or another related to it
19:25 * rbuels spews steam from his vents
19:25 newtobioperl joined #bioperl
19:25 newtobioperl has anyone ever installed JBrowse?
19:26 newtobioperl I am trying to install JBrowse but I get some errors when trying to run bin/generate-names.pl -v
19:42 rbuels newtobioperl: probably the gmod-ajax mailing list would be the place to ask
19:42 rbuels newtobioperl: jbrowse isn't really related to bioperl
19:47 pyrimidine I've installed GBrowse 2 locally, will be installing JBrowse depending on how painful it will be
19:47 pyrimidine but that's not an immediate concern
19:51 newtobioperl good luck with JBrowse
19:51 newtobioperl it's a real pain
19:51 newtobioperl poor documentation
19:54 newtobioperl I use GS Mapper for HIV Clusters analysis
19:54 newtobioperl Couldn't do it with 8GB Human Genome :)
19:55 pyrimidine JBrowse is still pretty new, and still can't do everything that GBrowse 2 can.  But Lincoln has hinted it will be the eventual replacement for GBrowse.
19:56 pyrimidine left #bioperl
19:59 flu newtobioperl, +1 to mailing gmod-ajax.  The main dev on there is very helpful and patient.  You should also let him know about the documentation problems.  He may not be aware of them.
20:00 deafferret send doc patches!  :)
20:00 newtobioperl I am on there now. Trying to read as much as possible
20:03 flu deafferret, now that would be a lovely gift.
20:05 newtobioperl deafferret, I am with you on this. So when are you going to send new docs? lol
20:05 newtobioperl GBrowse is kind of slow
20:07 flu how so?  Which backend are you using, and how many features?
20:08 newtobioperl it's quite a lot
20:08 newtobioperl I use GS Mapper (454): query: HIV clusters (about 22k) to the human genome (UCSC, 22 chromosomes)
20:09 newtobioperl then I got a 13.8MB txt file (from GS Mapper)
20:09 newtobioperl parsed that text file to GFF3
20:10 newtobioperl ****HIV Clusters (about 22000 queries)
20:11 newtobioperl GBrowse couldn't load that GFF3 file
20:11 flu which adaptor?
20:11 newtobioperl I ended up cutting that GFF3 into 22 smaller GFF3 files
20:11 newtobioperl adaptor?
20:11 newtobioperl sorry, I am new
20:12 newtobioperl what do you mean by adaptor?
20:13 flu http://gmod.org/wiki/GBrowse_Adaptors
20:14 newtobioperl oh I use Bio::DB::SeqFeature::Store
20:14 newtobioperl db_adaptor    = Bio::DB::SeqFeature::Store
20:14 newtobioperl db_args       = -adaptor memory
20:15 flu does 1 HIV cluster = 1 feature line in the GFF or did you use a parent/child hierarchy?
20:16 newtobioperl example of the first and 2nd line in my chr1.gff3
20:16 newtobioperl human gmapper humanhiv 1 249226746 ... Name=human; human gmapper chr1 145240339 145240447 ... ID=match-Cluster_10456;Name=Cluster_10456;Note=Query: CTGCCTAGAATAAAATGGGGAGCACAGAAGAGGAACACCTGTATCAGTGGGTCTCTGAGAGCTTCTTCAGGAAAAATAATGCTTTTATCTGAGATTTGAAATATAGTAGGTGGAA, Human: CTACCTAGGATAAAAT-GGGAGCACAGAAGAGGAACATCTGTATCAGTGGGTCTCTGAGAGCTTCTTCAGGAAAA-TAATGCTTT-ATCTGTGATTTGAAATATAGTAGG---AA from chr1;
20:17 newtobioperl 1st line:  human gmapper humanhiv 1 249226746 ... Name=human;
20:17 newtobioperl 2nd line: human gmapper chr1 145240339 145240447 ... ID=match-Cluster_10456;Name=Cluster_10456;Note=Query: CTGCCTAGAATAAAATGGGGAGCACAGAAGAGGAACACCTGTATCAGTGGGTCTCTGAGAGCTTCTTCAGGAAAAATAATGCTTTTATCTGAGATTTGAAATATAGTAGGTGGAA, Human: CTACCTAGGATAAAAT-GGGAGCACAGAAGAGGAACATCTGTATCAGTGGGTCTCTGAGAGCTTCTTCAGGAAAA-TAATGCTTT-ATCTGTGATTTGAAATATAGTAGG---AA from chr1;
20:17 flu pastebin.com is your friend
20:18 flu that doesn't look like valid GFF v3 to me.
20:19 newtobioperl http://pastebin.com/7RSLFBRL
20:19 newtobioperl all those sequences are in the Note
20:20 newtobioperl GBrowse is able to read that GFF, it's just slow
20:21 flu right but that looks more like GFF v2 and the GBrowse adaptor you are using is for v3
20:22 rbuels newtobioperl: seems to me your first 3 columns should read something more like 'human_chr21 gmapper match'
20:22 flu I would either switch to the Bio::DB::GFF adaptor since you seem to be producing GFF v2 or switch to producing GFF v3 with the SeqFeature::Store adaptor.
20:23 rbuels newtobioperl: yeah, look at the gff3 spec: http://sequenceontology.org/resources/gff3.html
20:23 * deafferret spasms
20:23 flu producing valid gff v3 is probably the better long term solution
20:25 flu I've loaded ~10 million features into GBrowse and not had performance issues so this is almost certainly the cause of your problem and not a general issue with GBrowse.
20:25 flu 'this' being the wrong GFF format/adaptor combination
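
As a rough sketch of the valid GFF3 that flu and rbuels are asking for, here is one way a single cluster match might be emitted; the nine tab-separated columns follow the GFF3 spec linked above, the coordinates and cluster name come from the pasted example, and the seqid/source/type values ('chr1', 'gmapper', 'match') are illustrative guesses rather than a definitive conversion:

#!/usr/bin/perl
use strict;
use warnings;

# Emit a GFF3 header plus one feature line for a single cluster match.
print "##gff-version 3\n";
print join("\t",
    'chr1',       # 1: seqid  - the landmark the feature lives on
    'gmapper',    # 2: source - the program that produced the feature
    'match',      # 3: type   - a Sequence Ontology term, not a chromosome name
    145240339,    # 4: start
    145240447,    # 5: end
    '.',          # 6: score  - none
    '+',          # 7: strand
    '.',          # 8: phase  - only meaningful for CDS features
    'ID=match-Cluster_10456;Name=Cluster_10456',  # 9: attributes
), "\n";
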
20:27 newtobioperl I am reading the gff3.html
20:27 newtobioperl thank you
20:27 newtobioperl btw, do you know how to use the fasta?
20:28 flu not sure what you mean
20:28 newtobioperl let's say I click on Cluster_XXXXX, GBrowse is going to give me the Cluster_XXXXX sequence
20:28 newtobioperl like that yeast example
20:30 flu you can configure it to do that, sure
20:31 * flu wonders if anyone is going to check on deafferret
20:31 deafferret go on....   wi   wi    without me...    -gasp-
20:32 newtobioperl do you have an example somewhere?
20:32 newtobioperl can you show me how to configure that?
20:33 newtobioperl I tried to put a FASTA file in the same folder as the GFF file but it didn't show up
20:35 flu You can include sequence data in GFF v3 files and the SeqFeature::Store loader will correctly handle it for you.
20:36 newtobioperl just like I did, right? I put the sequence data into the Note
20:36 newtobioperl i am just wondering if there is another way to do it
20:36 flu no, see the GFF v3 spec
20:37 newtobioperl oh, I see
20:37 newtobioperl thanks
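
A small sketch of what flu means about sequence inside GFF3, assuming a hypothetical file name clusters.gff3: per the spec, a GFF3 file may end with a ##FASTA directive followed by ordinary FASTA records, and the Bio::DB::SeqFeature::Store loader will store that sequence for the named landmark, so GBrowse can show the sequence of a clicked feature. The sequence below is a placeholder, not real data.

#!/usr/bin/perl
use strict;
use warnings;

# Write a tiny GFF3 file whose tail carries the reference sequence
# after a ##FASTA directive.
open my $gff_fh, '>', 'clusters.gff3' or die "Cannot open clusters.gff3: $!";

print $gff_fh "##gff-version 3\n";
# ... feature lines like the one sketched above would be printed here ...

print $gff_fh <<'EOF';
##FASTA
>chr1
CTACCTAGGATAAAATGGGAGCACAGAAGAGGAACATC
EOF

close $gff_fh;
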
20:40 newtobioperl I just changed the adaptor to GFF. It does load faster
20:41 newtobioperl but I'll try to change my GFF to GFF3
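
For the GFF3 plus Bio::DB::SeqFeature::Store route flu recommends, a minimal loading sketch, assuming a hypothetical MySQL database named hiv_clusters and the clusters.gff3 file from the sketch above (a DBI::SQLite or berkeleydb adaptor would be set up the same way):

#!/usr/bin/perl
use strict;
use warnings;

use Bio::DB::SeqFeature::Store;
use Bio::DB::SeqFeature::Store::GFF3Loader;

# Open (and create) a persistent feature store; the database name and
# credentials are placeholders.
my $store = Bio::DB::SeqFeature::Store->new(
    -adaptor => 'DBI::mysql',
    -dsn     => 'dbi:mysql:database=hiv_clusters',
    -user    => 'gbrowse',
    -pass    => 'secret',
    -create  => 1,
);

# Load the GFF3 file, including any trailing ##FASTA section, into the store.
my $loader = Bio::DB::SeqFeature::Store::GFF3Loader->new(
    -store   => $store,
    -verbose => 1,
);
$loader->load('clusters.gff3');

# Quick sanity check: count the match features that were loaded.
my @matches = $store->features(-type => 'match');
print scalar(@matches), " match features loaded\n";

The GBrowse stanza would then keep db_adaptor = Bio::DB::SeqFeature::Store but point db_args at the same adaptor and DSN instead of "-adaptor memory".
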
22:44 bag joined #bioperl
23:26 dnewkirk joined #bioperl
