As a LL data manager, I want to load data from the publish layer into the EAV (pheno) model
Scrum: ticket:1067
Acceptance criteria:
Status:
To-do's:
- Add logging, so we can see what is going on when it crashes in the production environment, if that ever occurs (see the logging sketch after this list)
- Add Thread Monitor
- How to handle/load/implement descriptive tables like LAB_BEPALING; this table is really a big list of Measurements with a lot of extra fields.
  - options:
    - Create a new type that extends Measurement and holds the additional fields (a rough sketch follows after this list)
    - Merge the data into the label of the Category
- How to handle/load/implement the table that describes which foreign keys are used between the tables (see the ForeignKeyLink sketch after this list).
  - The matrix viewer should know this info as well, so it can build correct queries.
- Re-factor the lifelines packages (they are a bit messy): remove old code that is no longer used and place the remaining code in descriptive packages.
- Remove JPA dependencies
- Many-to-many relations in JPA do not work properly with labels, for example ov.setTarget_Name("x"). In JDBCMapper this is solved, but we do not know yet where and how we could best do this for JPA. Setting by label should also be covered by the generated tests.
- Remove/change the org.molgenis.JpaDatabase interface to prevent this
- The following trick to prevent a compilation problem in Hudson should be changed:
  - this.em = ((JpaDatabase) db).getEntityManager().getEntityManagerFactory().createEntityManager();
- JPA reverse relations cause Hudson to give a compile error. The reverse-relation methods should also be added to molgenis for non-JPA entities, and implemented as @Deprecated or by throwing UnsupportedOperationException.
- Update the CSV readers to be multi-threaded? (see the producer/consumer sketch after this list)
- In the (production) environment it is not a bad idea to run the Java executable in the Oracle VM that is part of the database.
- Last but not least, test whether the data is loaded correctly (test from Anco).
- We should make sure that the data is always loaded in the right format (meaning it always ends up the right way in the database).
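
The sketch below illustrates the logging to-do: wrap each table load in log statements so a production crash leaves a trace. It is only a sketch; `PhenoLoader` and `loadTable` are made-up names, and plain `java.util.logging` is assumed, while the project may prefer Log4j.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical loader class; the real publish-layer-to-EAV loader class may be named differently.
public class PhenoLoader {

    private static final Logger LOG = Logger.getLogger(PhenoLoader.class.getName());

    public void loadTable(String tableName) {
        LOG.info("Start loading table " + tableName);
        try {
            // ... actual loading of the publish layer table into the EAV (pheno) model ...
        } catch (RuntimeException e) {
            // Log with the stack trace so crashes in production can be diagnosed afterwards.
            LOG.log(Level.SEVERE, "Loading of table " + tableName + " failed", e);
            throw e;
        }
        LOG.info("Finished loading table " + tableName);
    }
}
```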
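A rough sketch of the first LAB_BEPALING option: a subtype of Measurement that holds the extra columns. In MOLGENIS the subtype would normally be declared in the model XML and generated, so this hand-written class (with invented field names) only illustrates the shape of the data.

```java
import javax.persistence.Entity;

// Illustration only: a subtype that keeps the extra descriptive LAB_BEPALING columns
// next to the normal Measurement fields. Field names are invented examples.
@Entity
public class LabMeasurement extends Measurement {

    private String labCode;        // e.g. the lab-specific code of the determination
    private String referenceRange; // e.g. the normal-value range as free text

    public String getLabCode() { return labCode; }

    public void setLabCode(String labCode) { this.labCode = labCode; }

    public String getReferenceRange() { return referenceRange; }

    public void setReferenceRange(String referenceRange) { this.referenceRange = referenceRange; }
}
```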
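For the table that describes which foreign keys connect the tables, one possible shared representation is a small value object that both the loader and the matrix viewer read, so the viewer can build its joins from the same information. Everything here (class, table, and column names) is invented for illustration.

```java
// Illustration only: one row of the descriptive foreign-key table, loaded into memory.
public class ForeignKeyLink {

    public final String fromTable;
    public final String fromColumn;
    public final String toTable;
    public final String toColumn;

    public ForeignKeyLink(String fromTable, String fromColumn, String toTable, String toColumn) {
        this.fromTable = fromTable;
        this.fromColumn = fromColumn;
        this.toTable = toTable;
        this.toColumn = toColumn;
    }

    // The matrix viewer could use such links to decide how to join tables, e.g.
    // new ForeignKeyLink("LAB_BEPALING", "PATIENT_ID", "PATIENT", "ID")
}
```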
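For the question whether the CSV readers should be multi-threaded, the producer/consumer sketch below uses only standard java.util.concurrent: one thread reads lines into a queue, a pool of workers parses and loads them. It deliberately does not use the MOLGENIS CSV classes; processRow is a placeholder for the real parse-and-insert step.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one reader thread fills a queue with raw CSV lines, a pool of workers parses and
// loads them. processRow is a placeholder; the real loader would use the MOLGENIS CSV classes.
public class ParallelCsvLoad {

    private static final String POISON = "\u0000EOF"; // end-of-input marker, one per worker

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    String row;
                    while (!(row = queue.take()).equals(POISON)) {
                        processRow(row); // placeholder: parse the line and insert it into the EAV model
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                queue.put(line);
            }
        }
        for (int i = 0; i < workers; i++) {
            queue.put(POISON);
        }
        pool.shutdown();
    }

    private static void processRow(String row) {
        // placeholder for the actual parsing/loading work
    }
}
```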