author | categories | comments | date | header | slug | tags | title | omit_header_text | disable_share | wordpress_id
---|---|---|---|---|---|---|---|---|---|---
einar | | true | 2007-10-09T20:00:23Z | | soft-file-woes | | SOFT file woes | true | true | 298
Today I started working on a data set published on GEO. Since the sample information was somewhat inconsistent (it mentioned 23 controls while I found 28), I decided to parse the SOFT file from GEO to get the exact sample annotations.
That was a grave mistake. First of all, Biopython's SOFT parser is horribly broken (it doesn't work at all) and largely undocumented: I could work around the missing documentation (no API docs), but not around the fact that it simply doesn't work. So I turned to R, which offers a GEO query module through Bioconductor.
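
For the record, the kind of usage I expected from Biopython's SOFT parser (the Bio.Geo module) is roughly the following; a minimal sketch, with a placeholder file name:

```python
from Bio import Geo

# Minimal sketch of the intended Biopython usage; in practice it fell over
# on a real GEO family file. "GSE_family.soft" is just a placeholder name.
with open("GSE_family.soft") as handle:
    for record in Geo.parse(handle):
        # Each record should correspond to one SOFT entity
        # (series, platform or sample).
        print(record)
```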
The R route proved to be a terrible mistake too. For a file containing 183 samples, the parsing has been running for four hours with no sign of finishing anytime soon (not to mention what looks like a memory leak). At that point I gave up. I'm going to grab the reduced data sheet and write a small parser in Python myself.
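
The format is simple enough to make this feasible: SOFT files are line-oriented, with entity lines starting with `^` and attribute lines starting with `!`. A rough sketch of the parser I have in mind (the file name is a placeholder) could look like this:

```python
from collections import defaultdict

def parse_soft_samples(path):
    """Collect per-sample attributes from a GEO family SOFT file.

    SOFT is line-oriented: entity lines start with '^' (e.g. ^SAMPLE = GSM...),
    attribute lines start with '!' (e.g. !Sample_characteristics_ch1 = ...).
    Expression tables between the *_table_begin / *_table_end markers are
    skipped, since only the sample annotations are of interest here.
    """
    samples = {}
    current = None
    in_table = False
    with open(path) as handle:
        for line in handle:
            line = line.rstrip()
            if line.startswith("^SAMPLE"):
                current = line.partition("=")[2].strip()
                samples[current] = defaultdict(list)
                continue
            if line.startswith("^"):
                current = None  # some other entity (SERIES, PLATFORM, ...)
                continue
            if current is None:
                continue
            if line.endswith("_table_begin"):
                in_table = True
                continue
            if line.endswith("_table_end"):
                in_table = False
                continue
            if in_table or not line.startswith("!"):
                continue
            key, _, value = line[1:].partition("=")
            samples[current][key.strip()].append(value.strip())
    return samples

if __name__ == "__main__":
    samples = parse_soft_samples("GSE_family.soft")
    print(len(samples), "samples")
    for gsm, attrs in samples.items():
        print(gsm, attrs.get("Sample_characteristics_ch1"))
```

Nothing clever, but it should be enough to count the controls and settle the 23-versus-28 question without waiting four hours.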
What frustrates me is the lack of quality: if the existing implementations worked, I could concentrate on my own work instead of reinventing the wheel for the nth time. What's the point of releasing software that doesn't work? I can understand bugs, but this goes one step further.