Abstract
In large-scale proteomics studies there is a temptation, after months of experimental work, to plug the resulting data into a convenient if poorly implemented set of tools, which may neither do the data justice nor help answer the scientific question. In this paper we have captured key concerns, including arguments for community-wide open-source software development and "big data"-compatible solutions for the future. In the meantime, we have laid out ten top tips for data processing. With these at hand, a first large-scale proteomics analysis hopefully becomes less daunting to navigate.
However, there is clearly a real need for robust tools, standard operating procedures, and general acceptance of best practices. Thus we submit to the proteomics community a call for a community-wide, open set of proteomics analysis challenges (PROTEINCHALLENGE) that directly target and compare data analysis workflows, with the aim of setting a community-driven gold standard for data handling, reporting and sharing.
This article is part of a Special Issue entitled: New Horizons and Applications for Proteomics [EuPA 2012]. © 2013 Elsevier B.V. All rights reserved.
| Original language | English |
| --- | --- |
| Pages (from-to) | 41-46 |
| Number of pages | 6 |
| Journal | Journal of Proteomics |
| Volume | 88 |
| DOIs | |
| Publication status | Published - 2 Aug 2013 |
Keywords
- Software
- Algorithms
- Phosphorylation networks
- Quantification
- Informatics
- Benchmarking
- Data analysis
- Open source
- Community challenge
- Resolution
- Model
- Tool
- Crowdsourcing
- Yeast
- Mass spectrometry