In evidence-based decision-making, data is often used to measure performance, structure incentives, and allocate resources while informing public courses of action. Setting up an anti-corruption agency (allocation of resources and a course of action) in the hope of deterring high-end corruption (structuring incentives) because corruption is rampant (performance) is one such example. Determining the course of action and the allocation of resources is often political. The process of measuring performance, while also political, is often hierarchical, as the data is generated by international agencies, bureaus of statistics, and researchers. Even when the research process is done in consultation with community members, the ultimate interpretation and presentation of the analysis are often shaped by the researchers’ cultural nuances, definitions, biases, or vested interests.
Bribery, for example, is the primary consideration in designing the bribery index and, in turn, a primary determinant of corruption rankings among the countries of the world. In cultural contexts where it is informally acceptable to give money or gifts for a public service rendered, as a way of expressing gratitude, bribery becomes a very problematic measure of corruption. It means that, as long as corruption is measured using the bribery index, some countries will never shed the label of being among the most corrupt in the world, because their way of life integrates “transfer of value in the context of an official action.”
Selecting an indicator or a set of indicators to measure performance also means leaving out many factors that perhaps make more sense to a community or tell the whole story as experienced by its members. Corruption seen through the prism of bribery, or another definition such as nepotistic appointments to public positions, may not be a significant or relevant measure of corruption in a communal culture like that of the Somalis in my country, who select leaders based on clans. However, personal experiences of being denied essential services, such as access to health facilities, or of misuse of resources that harms community members while unfairly enriching others, may be a more relevant, tangible, and relatable definition of corruption to them. Therefore, in measuring corruption, the latter indicators should matter more than the former, to validate these lived experiences and contexts, whether or not I agree with them from a research point of view.
It is therefore important to remind ourselves that data is never neutral. Data can create illusions of rationality yet mask underlying fundamental differences of interpretation, purpose, and power among the various stakeholders situated on both sides of the spectrum, such as researchers (with an academic outlook) and communities (with lived experiences). Scarcity of resources is at the root of relying on other people’s data, and hence on their interpretation of what is or is not important. This means that the real experts, those with lived experiences whose perspectives should matter most in giving meaning to the data or to the indicators used to measure performance, are often ignored.
As a researcher, I must be committed to presenting the community’s outlook with honesty and respect. Human-centered approaches to data and research, ones that truly regard multiple community definitions and perspectives; that seek collaborative rather than extractive data relationships; that treat lived experiences as equally important as academic perspectives; that place community welfare above researcher interests; and that promote social trust and community realities, must be embedded within research and data analysis if it is to lead to mutually beneficial and just evidence-based decision-making processes.