Discovery Lead / Data Collector
To implement or improve a process you have to know, in great detail, what's going on. I have a particular appreciation for this, having built simulations for most of my career. To build a model of a system you have to accurately represent everything that makes it what it is and do what it does. You learn which factors must be included and which can safely be excluded. You learn cause and effect, and you come to understand the relative magnitudes of different properties. You learn to use first principles and various abstractions.
I've collected data about systems in many different ways.
Effective data collection assembles detailed information that quantifies and characterizes the elements uncovered during the discovery process.
- Accomplish data collection through SME interviews, real-time and historical electronic data capture, video (with post-observation breakdown), on-site manual survey and checksheets, batch and sample measurements, calculations, interpolations, and logical deductions.
- Cross-reference collected data against the requirements identified during discovery, in both directions: confirm that every requirement has supporting data, and that every piece of collected data maps back to a requirement. This further ensures that no qualitative or quantitative information is missed.
- Process data as appropriate to meet the needs of analysis or of use in code.
- Verify the authority, veracity, and provenance of the acquired data. This mostly applies to data you did not collect directly.
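The bidirectional cross-referencing described above can be sketched in a few lines. This is a minimal illustration, not a real tool: the field names are hypothetical, and a real project would match richer records than plain strings.

```python
# Minimal sketch of bidirectional cross-referencing: compare the set of
# data items required by discovery against the set actually collected,
# reporting gaps in both directions. Field names are illustrative only.

def cross_reference(required_fields: set, collected_fields: set) -> dict:
    """Return requirements with no data, and data with no requirement."""
    return {
        "missing_data": required_fields - collected_fields,    # required but never collected
        "unmatched_data": collected_fields - required_fields,  # collected but never required
    }

# Hypothetical example values
required = {"cycle_time", "queue_length", "arrival_rate"}
collected = {"cycle_time", "queue_length", "operator_count"}

report = cross_reference(required, collected)
print(report["missing_data"])    # requirements still lacking data
print(report["unmatched_data"])  # collected data with no matching requirement
```

Checking both directions is the point: one direction catches holes in the dataset, the other flags data that may signal a requirement the discovery process missed.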
Once the data is collected, organized, and verified, it should be tracked continuously through every transformation leading to the generation of output data. Strictly speaking this isn't part of data collection, but of data analysis more generally.
I'm a big fan of embedding meta-information about the source data within the data structure and overall program flow. I discuss this concept here.
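One way to embed that meta-information is to carry the source description and a transformation log alongside the values themselves, so provenance survives every processing step. The sketch below is a hypothetical illustration, assuming an immutable record and a simple per-value transform; the names and fields are my own, not from any particular project.

```python
# Hypothetical sketch: data that carries its own provenance. Each
# transformation returns a new record with the operation appended to
# an ordered log, so the path from source to output stays visible.
from dataclasses import dataclass, replace
from typing import Callable, Tuple

@dataclass(frozen=True)
class TrackedData:
    values: Tuple[float, ...]
    source: str                        # where the data came from (SME interview, sensor log, ...)
    transforms: Tuple[str, ...] = ()   # ordered record of every transformation applied

    def apply(self, name: str, fn: Callable[[float], float]) -> "TrackedData":
        """Apply a transformation to every value and record it in the log."""
        return replace(
            self,
            values=tuple(fn(v) for v in self.values),
            transforms=self.transforms + (name,),
        )

# Illustrative usage
raw = TrackedData(values=(12.0, 15.5, 9.8), source="line-3 cycle-time checksheet")
clean = raw.apply("minutes_to_seconds", lambda v: v * 60.0)
print(clean.transforms)  # ('minutes_to_seconds',)
print(clean.source)      # provenance survives the transformation
```

Making the record immutable forces every transformation through `apply`, which is what keeps the log trustworthy: there is no way to change the values without leaving a trace.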