Changes between Initial Version and Version 1 of StoryConvertPhenoData


Timestamp: 2011-11-28T07:06:39+01:00
Author: Morris Swertz
== As a LL data manager, I want to load data from the publish layer into EAV (pheno model) ==
Scrum: ticket:1067

Acceptance criteria:

Status:

To-do's:

 * Add logging (so we can see what is going on if it ever crashes in the production environment)
   * Add a thread monitor
 * Decide how to handle/load/implement descriptive tables like LAB_BEPALING; this table is actually a big list of Measurements with a lot of extra fields.
   * Options:
     * Create a new type that extends Measurement and holds the additional fields
     * Merge the data into the label of Category
 * Decide how to handle/load/implement the table that describes which foreign keys are used between the tables.
   * The matrix viewer should know this info as well, so it can build correct queries
 * Refactor the lifelines packages (they are a little messy): remove old, no longer used code and organize it into descriptive packages
 * Remove JPA dependencies
   * Many-to-many relations in JPA do not work properly with labels, for example ov.setTarget_Name("x"). In JDBCMapper this is solved, but we do not yet know where and how best to do this for JPA. Setting by label should also be covered by the generated tests
   * Remove/change the org.molgenis.[wiki:JpaDatabase] interface to prevent this
   * //trick to prevent compilation problem in Hudson, should be changed!
   * this.em = ((JpaDatabase)db).getEntityManager().getEntityManagerFactory().createEntityManager();
   * JPA reverse relations cause Hudson to give a compile error. They should be added to MOLGENIS for non-JPA entities, and implemented as @Deprecated or throwing UnsupportedOperationException.
 * Update the CSV readers to be multi-threaded?
 * In the production environment it's not a bad idea to put the Java executable in the Oracle VM that is part of the database.
 * Last but not least, test whether the data is loaded correctly (test from Anco).
 * We should make sure the data is always loaded in the right format (meaning it always ends up the right way in the database).
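
The logging/thread-monitor to-do above could look roughly like this. This is a minimal sketch with invented names (`LoadMonitor`, `loadBatch`) using only `java.util.logging`; it is not part of the MOLGENIS API:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: log every batch so a crash in production leaves a trace, and run a
// daemon "monitor" thread that periodically reports progress.
class LoadMonitor {
    private static final Logger LOG = Logger.getLogger(LoadMonitor.class.getName());
    private volatile int rowsLoaded = 0;

    void loadBatch(int batchSize) {
        try {
            // ... the actual EAV inserts would go here ...
            rowsLoaded += batchSize;
            LOG.info("Loaded batch, total rows: " + rowsLoaded);
        } catch (RuntimeException e) {
            LOG.log(Level.SEVERE, "Load failed at row " + rowsLoaded, e);
            throw e;
        }
    }

    Thread startMonitor(long intervalMillis) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                LOG.info("Monitor: " + rowsLoaded + " rows loaded so far");
                try { Thread.sleep(intervalMillis); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }, "load-monitor");
        t.setDaemon(true);
        t.start();
        return t;
    }

    int getRowsLoaded() { return rowsLoaded; }
}
```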
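For the LAB_BEPALING question, the first option (a subtype of Measurement holding the extra columns) could be sketched as below. The field names (`unit`, `labSection`) are illustrative assumptions, not the actual pheno-model or LAB_BEPALING schema, and `Measurement` here is a simplified stand-in for the real class:

```java
// Simplified stand-in for the pheno-model Measurement class.
class Measurement {
    private final String name;
    Measurement(String name) { this.name = name; }
    String getName() { return name; }
}

// Option 1: a subtype carrying the extra LAB_BEPALING fields (names assumed).
class LabMeasurement extends Measurement {
    private final String unit;       // e.g. "mmol/L" -- assumed extra field
    private final String labSection; // assumed extra field

    LabMeasurement(String name, String unit, String labSection) {
        super(name);
        this.unit = unit;
        this.labSection = labSection;
    }

    String getUnit() { return unit; }
    String getLabSection() { return labSection; }
}
```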
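The foreign-key description table could be loaded into an in-memory lookup that both the loader and the matrix viewer consult when building join queries. A hypothetical sketch (class, method, and table/column names are all invented for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: keep the foreign-key metadata as a map from "table.column" to the
// referenced table, and derive JOIN clauses from it for the matrix viewer.
class ForeignKeyMap {
    private final Map<String, String> fks = new LinkedHashMap<>();

    void add(String table, String column, String refTable) {
        fks.put(table + "." + column, refTable);
    }

    /** Build a JOIN clause for one relation (refKeyColumn is the referenced PK). */
    String joinClause(String table, String column, String refKeyColumn) {
        String refTable = fks.get(table + "." + column);
        return "JOIN " + refTable + " ON " + table + "." + column
                + " = " + refTable + "." + refKeyColumn;
    }
}
```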
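Until the JpaDatabase cast trick is removed, it could at least be guarded so that non-JPA Database implementations fail with a clear UnsupportedOperationException (as suggested in the to-do) rather than a ClassCastException. The interfaces below are simplified stand-ins, not the real org.molgenis API:

```java
// Simplified stand-ins for the MOLGENIS interfaces.
interface Database {}
interface JpaDatabase extends Database {
    Object getEntityManager(); // stand-in for the real EntityManager accessor
}

class EntityManagerAccess {
    // Guarded version of the "((JpaDatabase) db)" cast from the to-do list.
    static Object entityManagerOf(Database db) {
        if (!(db instanceof JpaDatabase)) {
            throw new UnsupportedOperationException(
                "getEntityManager is only available for JPA databases");
        }
        return ((JpaDatabase) db).getEntityManager();
    }
}
```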
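If the CSV readers are made multi-threaded, one way is to parse on one thread and hand fixed-size batches to a small worker pool. In this sketch the database insert is simulated by counting rows; everything here (`ParallelCsvLoad`, the batch size) is an illustrative assumption, not the existing reader code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: split the parsed CSV lines into batches and process each batch on a
// worker thread; the real loader would insert into the EAV tables instead.
class ParallelCsvLoad {
    static int load(List<String> csvLines, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Integer>> results = new ArrayList<>();
        int batchSize = 2; // small for illustration; tune in practice
        for (int i = 0; i < csvLines.size(); i += batchSize) {
            List<String> batch =
                csvLines.subList(i, Math.min(i + batchSize, csvLines.size()));
            results.add(pool.submit(() -> batch.size())); // stand-in for DB insert
        }
        int total = 0;
        try {
            for (Future<Integer> f : results) total += f.get();
        } catch (Exception e) {
            throw new RuntimeException("batch load failed", e);
        } finally {
            pool.shutdown();
        }
        return total;
    }
}
```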