ICT indicators provide a snapshot summary of information about projects, countries or regions. The vantage point of the snapshot provides an indication of who is taking the picture and what is being identified as important – or not. By way of example, a security firm could develop a risk indicator for retail stores taking into account such factors as the number of entry points to the store, how many security cameras there are, timer locks on the store safe, bars on windows, and background check protocols for hiring staff. Such an indicator would purport to advise on the likelihood of the store being targeted for robbery and being successfully robbed.
The security indicator could then be used by insurance firms to assess insurance risk; by security firms to identify where the existing security system needs reinforcing; and by potential thieves to pinpoint weak points. Conversely, the owner of the enterprise might use the security indicator (perhaps without divulging its constituent statistical elements) as supporting evidence when claiming to potential investors that the business is not risky. This would be a misleading use of the indicator, as investors are looking for a different kind of security, or at least a broader definition of it: the indicator provides no evidence on the likelihood of the owner using the store to launder money, or under-reporting earnings for the purpose of tax evasion.
This kind of issue also arises in using indicators for advocacy. As will be discussed further below, indicators are not neutral and express different things. The fact that the providers of a particular set of indicators sit on a different side of the fence does not mean that their data or methodology is necessarily corrupt, flawed or bad. We can assume, nonetheless, that indicators are devised for different reasons, with different focuses, and thus approach the data from different perspectives. Despite agreement on the importance of ICTs, there is no sweeping consensus on approaches or conceptual models. What are the most salient aspects that will demonstrate progress? And what kinds of progress? Do we measure simply the incidence of infrastructure and technology penetration? Or do we go further and also include data documenting economic and social progress?
Indicators are an abbreviated language or device: they point, but do not explain. So it is useful to know who is doing the pointing, their motivation for pointing in the first place, and the evidence used to legitimise their authority to point convincingly. We often accept the authority of indicators without delving into their methodologies. Overall, indicators must be understood as value-laden rather than neutral: they provide a snapshot of progress in the context of the particular world view of their creators and carry those creators' inherent values.
Indicators can contribute to three main aspects of ICT policy development:
- Needs assessment
- Monitoring progress in different economic and social sectors
- Providing evaluation and feedback for specific programmes and initiatives.
Indicators are essential for setting policy priorities, measuring progress towards targets, and benchmarking results. Thus, indicators can also be viewed as having a definitional function, setting the parameters of the problem to be addressed. The decision about which indicators are important to collect provides evidence of what is being valued. The definition, design and measurement underlying indicators must be carried out with reference to how they are intended to be used; otherwise, indicators can be false and misleading measures. This underlines the importance of policy advocates being proactive in defining which indicators are important.
One of the most obvious examples is that only recently have statistics and indicators disaggregated by gender been viewed as essential in mainstream practice – although it has long been known that women and girls typically do not have the same level of access to training and technology as men and boys. Without this kind of statistical information about access levels between the sexes, no real targets can be set, and realistic strategies for achieving them cannot be devised. Beyond gender, there are also many instances of already marginalised groups not being counted in statistical indicators, on the grounds that they are difficult to include for a variety of reasons. Advocacy groups working at the grassroots level are particularly well situated to correct this oversight where it occurs, giving the marginalised a voice – or at least a number.
Indicators can serve an advocacy function in support of demands around national-level policy-making; to illustrate a basis for universal service projects; to lobby for a particular change in regulation; and so forth. There are international conventions for national-level collection of data to report on a variety of socio-demographic phenomena such as population, health, educational attainment, and economic performance (among others). These data are used comparatively and across time to inform policies, target programmes, and guide investment decisions. Data about technology penetration and use are increasingly being used to form part of this picture.
Data are collected and combined to form indicators. An indicator is an interpretation of the data and provides a snapshot of the assessed terrain from the perspective of what we want to show. Thus, if we consider the information society as mainly being concerned with access to technology, we will build an indicator that balances data about population, penetration of infrastructure, and the cost of using it. The change in the indicator over time then provides feedback on policy performance, as illustrated in Figure 1. The next section considers the practical challenges of moving along the spectrum from data collection to indicators.
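The access-focused indicator described above can be sketched in a few lines of code. This is purely an illustrative composite, not any official index: the component names, the normalisation (penetration capped at 1, affordability as cost relative to income), and the equal weighting are all assumptions chosen for clarity.

```python
def access_indicator(population, subscriptions, monthly_cost,
                     monthly_income, weights=(0.5, 0.5)):
    """Illustrative composite indicator of technology access.

    Balances two normalised components, as the text suggests:
    - penetration: share of the population with a subscription (capped at 1)
    - affordability: 1 minus the cost of use relative to monthly income
    The weights are hypothetical; a real index would justify them.
    """
    penetration = min(subscriptions / population, 1.0)
    affordability = max(0.0, 1.0 - monthly_cost / monthly_income)
    w_pen, w_aff = weights
    return w_pen * penetration + w_aff * affordability


# Change in the indicator over time as policy feedback (invented figures):
year_1 = access_indicator(1_000_000, 250_000, 20.0, 400.0)
year_2 = access_indicator(1_020_000, 510_000, 15.0, 420.0)
print(round(year_1, 3), round(year_2, 3))  # the rise suggests improving access
```

The point of the sketch is the structure, not the numbers: the choice of components and weights embodies a view of what "access" means, which is exactly why such indicators are value-laden.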