xQTL workbench - Run QTL mapping
Start by clicking Run QTL mapping in the main menu. This takes you to the central screen for starting a new analysis or viewing running ones, in this case QTL mapping.
Starting a new job
Click Start new analysis after selecting the compute location. Local runs the job on the same computer the application is running on; Cluster submits it to a remote location where more computational power may be available.
Step 1:
- Give a name to the output data matrix. If the name already exists, a timestamp is appended to make it unique.
- Select the kind of analysis you would like to run, e.g. Rqtl_analysis.
- Select the number of parts this computational task should be divided into. For larger sets of traits it is beneficial to set this to a higher value, so that many parts run in parallel. However, if little computing power (few CPUs) is available, a high setting may actually slow the task down. If you are in doubt, use the default of 5 (the sketch below illustrates why splitting helps).
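As an illustration of why splitting helps, here is a minimal sketch that divides a set of traits into parts and maps each part in parallel with R/qtl. The file names, the number of parts and the use of scanone/mclapply are assumptions for the example only, not the workbench's actual job script.

    # Minimal sketch: split a QTL mapping task into parts and run them in
    # parallel (assumed file names and parameters; not the workbench's job script).
    library(qtl)       # R/qtl, the engine behind Rqtl_analysis
    library(parallel)  # for mclapply / detectCores

    # Hypothetical genotype and phenotype files
    cross <- read.cross("csvs", genfile = "genotypes.csv", phefile = "phenotypes.csv")
    cross <- calc.genoprob(cross, step = 5)

    n_parts    <- 5                          # the default number of parts
    pheno_cols <- seq_len(nphe(cross))       # all trait columns
    parts      <- split(pheno_cols, cut(pheno_cols, n_parts, labels = FALSE))

    # Each part is mapped on its own core; with few CPUs the overhead of many
    # parts outweighs the gain, which is why a very high setting can be slower.
    results <- mclapply(parts, function(cols) {
      scanone(cross, pheno.col = cols, method = "hk")
    }, mc.cores = min(n_parts, detectCores()))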
Step 2:
- Select the datasets to use for this analysis. When many datasets have been uploaded and tagged as phenotype and genotype data, be careful to select the right combination.
- Select the parameters you would like to use; the sketch below illustrates the kind of parameters involved in an R/qtl run.
- Start the analysis by pressing Start; this takes you to the running analysis view, where you can follow its progress.
Note that if you cannot find your dataset, or are unhappy with any of the parameters, a bioinformatician (or admin) can configure them for you.
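To make the dataset and parameter steps more concrete, the sketch below runs a single-trait scan in plain R/qtl on one genotype/phenotype pair. The mapping method, step size and permutation count are examples of the kind of parameters a bioinformatician may configure for you; they are not the workbench's exact defaults, and the file names are again hypothetical.

    # Sketch of a single-trait QTL scan with example parameter values
    # (hypothetical input files; values are not workbench defaults).
    library(qtl)

    cross <- read.cross("csvs", genfile = "genotypes.csv", phefile = "phenotypes.csv")
    cross <- calc.genoprob(cross, step = 1)               # step size in cM

    out   <- scanone(cross, pheno.col = 1, method = "hk") # mapping method
    perms <- scanone(cross, pheno.col = 1, method = "hk",
                     n.perm = 1000)                       # permutation threshold

    summary(out, perms = perms, alpha = 0.05)             # significant peaks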
Monitoring job progression
- In the Run QTL mapping screen, click on View running analysis. This will take you to the running analysis monitor.
- The status of all running subjobs is shown here. The color coding indicates the status: failed (red, code -1), submitted (orange, 0), queued (yellow, 1), running (blue, 2), completed (green, 3). A small lookup sketch follows this list.
- Provenance is kept by storing all analysis parameters in the database. These can be viewed by hovering your mouse cursor over the marked text.
- Provenance of completed jobs can be removed by clicking the delete button. Note that this will not delete results already stored in the database.
- Failed subjobs (status code -1) can be resubmitted by clicking on them.
- Note that you must have supplied your remote login credentials for resubmission of cluster jobs. (Needs a fix: currently this can only be done by submitting a new job)
- The refresh page box can be used to shorten or lengthen the polling interval (5, 15, 30 or 60 seconds, or off).
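If you read job status information in your own scripts (for example when querying the database directly), the status codes listed above can be translated with a simple lookup. The helper below is only an illustration; the codes and colors are the ones documented in this list.

    # Lookup of the subjob status codes documented above; the helper function
    # itself is an illustration for use in your own monitoring scripts.
    status_names  <- c("-1" = "failed", "0" = "submitted", "1" = "queued",
                       "2" = "running", "3" = "completed")
    status_colors <- c("-1" = "red", "0" = "orange", "1" = "yellow",
                       "2" = "blue", "3" = "green")

    describe_status <- function(code) {
      key <- as.character(code)
      paste0(status_names[key], " (", status_colors[key], ")")
    }

    describe_status(-1)  # "failed (red)"
    describe_status(3)   # "completed (green)"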
Result data
- When the analysis has finished running, click the given name of your result in the Output column to start browsing the results.
- Renaming the result data afterwards will break this link and is not advised.
- The result data is stored in the same investigation as the first data matrix in the DataSet list used for this analysis.