Evaluation – some general pointers
Generating information from data that can serve as a basis for decision-making appears almost trivial at first glance.
However, on closer examination, this impression changes very quickly. No matter how much data, how many numbers and facts you have, they are only a fraction of the theoretical total of ALL existing facts. Furthermore, information is always collected with the help of certain technologies and procedures, or drawn from already existing data stocks. The methodology of collection and the perspective of observation, but also personal or social conditions and intentions, play a decisive role, both in the collection and in the evaluation. There is far more truth in the saying attributed to Dibelius, "I only trust statistics I have falsified myself!", than one is generally willing to accept.
Without assuming a motive or intention to falsify, it is important to take a close look at the survey procedures in order to exclude or compensate for possible errors, misclassifications, omissions and so on.
Only when the greatest possible freedom from errors has been established (and even this is a subjective judgement!) can an evaluation of the available data and information yield an approximately objective picture.
However, another factor also plays a role in this consideration: economy.
Increasing accuracy, greater freedom from errors, and a more complex (larger) amount of information usually demand a steadily growing expenditure of capacity, and thus, in part, exponentially rising costs.
All of these issues must be taken into consideration when evaluating data. In short: everyone should be aware that the results do not represent an "absolute" truth, but are always only a more or less sharp reflection of the current reality.
The field of statistics, as a branch of mathematics, provides us with many excellent tools for interpreting data correctly, i.e., for extracting the essential core of the puzzle. However, the larger the data sets become, the more diverse the facets that need to be considered, and above all, the more the individual partial data sets differ in weighting (both in relevance and in trustworthiness), the more difficult the evaluation becomes. After all, the interpretation of data and information is ultimately the core of any analytical work.
What is hidden in the mass of individual pieces of information? Where do you find repeating patterns and what do they look like? What correlations exist between the clusters? Which predictions can be made?
The answers to these questions are obtained with special tools, which nowadays can also be found outside of statistics. Modern analytics is approaching ever closer to the workings of the human brain and borrows from the way synapses interact. The general umbrella term for this is artificial intelligence (although we are convinced that we are still a very long way from it!).
But storing (past) experiences and solutions, and using them to solve upcoming tasks faster, is exactly a step in THAT direction. This is the path ARROWS takes with its highly efficient analysis tools.