July-Aug & Sept-Oct 2002

HEALTH STATISTICS

Death by numbers

Statistics (or the lack of them) lead to mismatched budgets, create inequities and skew health priorities

Strategies can evolve only when data is turned into information that can be used to forecast scenarios

Does the number of children who die before their first birthday decrease each year? How many children in India don't get a square meal? Do malaria, dengue, cholera, leprosy and other forgotten diseases occur only in pockets, sporadically, every other year? Is AIDS under control, or is it increasing uniformly across India? All these answers are provided by statistics.

Statistics give a snapshot of societal trends, influencing and shaping public opinion and defining to a very large extent how a government will act. But can these numbers be trusted? Are they as objective as they are thought to be or can they be manipulated to serve a particular subjective interest? How can one spot bad statistics? How often are statistics concocted in the backrooms of hospitals and shady government offices? When do statistics become unreliable?

The Beginning
In America in the late 1800s, immigration and health officials fed the public imagined figures and stories about migrants who brought disease, crime and prostitution, and ate into America's prosperity. To counter this, analysts devised scientific methods to count births, deaths and marriages, which tried to reflect the true health of the state. Those who conducted such numeric studies came to be called statisticians, and their "art", statistics. Over time, social research became more theoretical and more quantitative. As researchers collected and analysed their data, they began to see patterns and trends. As techniques grew more complex, the possibility of manipulation increased. Statisticians devised different methods that interpret the same data differently, and these methods often produce conflicting results, either because the results obtained from surveys are vastly different or because the tools used to analyse the data produce different answers (see box: Divide and rule).

Box: Divide and rule

The lack of standard protocols for assessment and poor measuring systems allow outcomes to be manipulated

Q. Does the mean incidence of cancer stay unchanged, increase or drop, using four different statistical methods?

Cancer incidence
                   1995    1996
Cervical            100     200
Prostate            200     100

• Arithmetic average
  1995: (100+200)/2 = 150    1996: (200+100)/2 = 150
  A. Incidence of cancer remains unchanged

• Arithmetic average of percentages, with the first period (1995) as base year (= 100%)
                   1995    1996
  Cervical         100%    200%
  Prostate         100%     50%
  Mean incidence   100%    125%
  A. There is a 25 per cent increase in the incidence of cancer

• Arithmetic average of percentages, with the second period (1996) as base year (= 100%)
                   1995    1996
  Cervical          50%    100%
  Prostate         200%    100%
  Mean incidence   125%    100%
  Relative to 1995 (125% taken as 100%), 1996 works out to 80%
  A. There is a 20 per cent decrease in the incidence of cancer

• Geometric average of percentages, using either period as base
  √(50% × 200%) = √(200% × 50%) = 100%
  A. Incidence of cancer remains unchanged
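
To make the arithmetic concrete, here is a minimal sketch (ours, not from the source) that reproduces the four calculations in the box above in Python, using the same illustrative 1995 and 1996 incidence counts.

# Minimal sketch of the four summaries in the box above (illustrative figures)
incidence = {
    "cervical": {1995: 100, 1996: 200},
    "prostate": {1995: 200, 1996: 100},
}

# 1. Arithmetic average of raw counts: 150 in both years -> "unchanged"
mean_1995 = (incidence["cervical"][1995] + incidence["prostate"][1995]) / 2
mean_1996 = (incidence["cervical"][1996] + incidence["prostate"][1996]) / 2

# 2. Percentages with 1995 as base year: 1996 averages 125% -> "25 per cent increase"
growth = [incidence[c][1996] / incidence[c][1995] * 100 for c in incidence]   # [200.0, 50.0]
mean_growth = sum(growth) / len(growth)                                       # 125.0

# 3. Percentages with 1996 as base year: 1995 averages 125%, so 1996 is
#    100/125 = 80% of 1995 -> "20 per cent decrease"
decline = [incidence[c][1995] / incidence[c][1996] * 100 for c in incidence]  # [50.0, 200.0]
ratio_1996_to_1995 = 100 / (sum(decline) / len(decline)) * 100                # 80.0

# 4. Geometric average of the percentages: sqrt(200% x 50%) = 100% -> "unchanged"
geometric_mean = (growth[0] * growth[1]) ** 0.5                               # 100.0

print(mean_1995, mean_1996, mean_growth, ratio_1996_to_1995, geometric_mean)

The same two columns of numbers thus yield "unchanged", "increased" or "decreased", depending solely on which summary statistic is chosen.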

Deadly Deception

Poverty
Poverty data globally is extremely poor and unreliable, according to a recent paper by Sanjay Reddy and Thomas Pogge, economists at Columbia University, New York. They have criticised the World Bank's World Development Report, a respected source of poverty and other social data, for its use of an arbitrary international poverty line unrelated to any clear conception of what poverty is. The report employs a misleading and inaccurate measure of purchasing power "equivalence" that creates serious and irreparable difficulties for international and inter-temporal comparisons of income poverty. It extrapolates incorrectly from limited data and creates an appearance of precision. The systematic flaws introduced by these three factors lead to a large understatement of the extent of global income poverty and to the incorrect inference that it has declined. Says Sanjay Reddy, "Such estimates give a skewed global picture. The discrepancy of under-estimating poverty is larger for poorer countries like India, especially for poorer states within India. All reports that have shown that poverty has declined and the gap between rich and poor has decreased are based on flawed data and need to be re-examined".1

Eminent economist Peter Svedberg of Stockholm University, author of Poverty and Undernutrition: Theory, Measurement and Policy (Oxford University Press, 2002), has severely criticised the many incorrect measures and yardsticks used by influential organisations like the Food and Agriculture Organisation. Data from such organisations has influenced food, hunger and nutrition policies and programmes in countries like India.2

Apart from international agencies like the United Nations and the World Bank, information on poverty in India is also estimated by national agencies such as the National Sample Survey Organisation (NSSO), the Central Statistical Organisation (CSO) and the Ministry of Rural Development, each using different parameters. But here too the numbers and data are often flawed. Poverty is defined by income earned over a period of time. Many poor people, however, still make a living by extracting food and other items from forests, rivers and other common resources. How does one account for people who live in non-monetised economies? How many such people meet their daily needs from these sources? How many of them are malnourished, vulnerable to disease, or have access to health services? This essential information is not available to policymakers because it is never recorded.

A number of projects aimed at reducing poverty rely on information provided by these agencies. If these numbers do not present a true picture of the nature and extent of poverty, any intervention that bases its objectives on them is bound to fail.

Malaria
Malaria is a classic example of how the largest disease control programme in the developing world has been run for over 40 years in the absence of quality data and information. According to the National Anti Malaria Programme (NAMP), two to three million cases of malaria are reported every year. The World Health Organisation's South East Asia Regional Office (WHO-SEARO) estimates that there are 15 million cases and 19,500 deaths in India annually, about five times the government's figure. This problem of unreliable information about malaria is not restricted to India; globally, malaria incidence figures remain speculative. Like India, Thailand and Brazil have fairly good surveillance systems, yet only half the clinical cases are reported. One study suggests that reported figures from Africa represent only about 5 to 10 per cent of the total prevalence, and the WHO estimates that actual figures could be as much as three-fold higher.
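
A rough back-of-envelope calculation shows how sensitive such burden estimates are to the assumed reporting rate; the capture fractions in this sketch are illustrative assumptions, not official figures.

# Illustrative only: reported cases scaled by an assumed capture fraction
reported_cases = 2_500_000                        # roughly the 2-3 million cases reported to NAMP
for assumed_capture in (1.0, 0.5, 1 / 6, 0.1):    # hypothetical reporting rates
    estimated_burden = reported_cases / assumed_capture
    print(f"if surveillance captures {assumed_capture:.0%} of cases: "
          f"~{estimated_burden / 1e6:.1f} million cases a year")

A capture rate of roughly one case in six would, for instance, turn 2.5 million reported cases into the 15 million that WHO-SEARO estimates.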

Depending on numbers for the control of malaria creates other problems too. The Annual Parasite Index (API) measures the presence of the malarial parasite in the bloodstream of a population, and indicates how persistently the pathogen survives in human blood across seasons. API figures reflect how many carriers of malaria exist in a community; once this is determined for a large population, susceptible groups can be identified. However, in a large management unit like a city or a district, where API varies widely between pockets, pockets with a high API get averaged out against pockets with a low API. Areas of potential outbreak thus remain unidentified.
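
The averaging problem is easy to see with a toy example; the pocket names, API values and action threshold below are invented purely for illustration.

# Invented figures: a district-wide average API hides the one pocket at risk
api_by_pocket = {"pocket A": 0.2, "pocket B": 0.3, "pocket C": 0.4, "pocket D": 6.0}
ACTION_THRESHOLD = 2.0                            # hypothetical level that triggers a response

district_api = sum(api_by_pocket.values()) / len(api_by_pocket)    # 1.725 -- below threshold
hotspots = [p for p, api in api_by_pocket.items() if api >= ACTION_THRESHOLD]

print(f"district-wide API: {district_api:.2f}")                    # looks unremarkable
print("pockets needing attention:", hotspots)                      # visible only pocket by pocket

The district-wide figure looks safe even though one pocket is six times over the hypothetical threshold, which is exactly how potential outbreak areas slip through.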

A large number of malaria cases go unreported: physicians prescribe anti-malaria regimens without blood tests, and private practitioners keep no records at all. The data-gathering procedures expected of grassroots workers are also too arduous (see box: Counting conundrum).

Counting conundrum

Cumbersome reporting procedures lead to misreporting. The operational manual for the malaria action programme, published by the National Malaria Eradication Programme (NMEP), New Delhi, reveals the complexity of these procedures. It provides broad guidelines for the different tiers of workers involved in malaria control to collect data. Different forms need to be filled in by all the multipurpose workers (MPW), surveillance workers, health inspectors, technicians, and zonal and district malaria officers. The forms cover the number of cases and examinations, family health registers, tour journals, monthly reports, positive cases and remedial steps taken, survey reports, spraying reports, fever treatment depot forms and so on. A separate set of forms is used for urban areas (which are covered under the Urban Malaria Scheme). In case of an epidemic, follow-up reports are also sent in different proformas, making the entire process of reporting very tedious. Most reports are sent from the state office to the central office every six months. In case of an outbreak in a remote area, such as villages in Assam or Orissa, a report takes anywhere between a week and a fortnight to reach the National Anti Malaria Programme (NAMP) in Delhi. By this time, the outbreak has become an epidemic.

Source: Directorate of National Malaria Eradication Programme 1995, Operational Manual for Malaria Action Programme (MAP), Ministry of Health and Family Welfare, New Delhi

 
