Thursday, June 14, 2007

Billions of Deaths and Decades Later, Vested Scientists Rethink Dogma

How many billions of animals have been poisoned in meaningless, misleading, horrifically painful toxicity tests since scientists first began evaluating animals as models of a chemical's toxicity in humans?

Too often (think always), when it comes to hurting animals, the scientific community is the most significant barrier to more humane practices. It all comes back to money, no doubt.

2007:

Report Calls for New Directions, Innovative Approaches in Testing Chemicals for Toxicity to Humans

Toxicity Testing in the Twenty-first Century: A Vision and a Strategy
Board on Environmental Studies and Toxicology (BEST)
Institute for Laboratory Animal Research (ILAR)

"The goal of toxicity testing is to develop data that can ensure appropriate protection of public health from the adverse effects of exposures to environmental agents. Current approaches to toxicity testing rely primarily on observing adverse biologic responses in homogeneous groups of animals exposed to high doses of a test agent. However, the relevance of such animal studies for the assessment of risks to heterogeneous human populations exposed at much lower concentrations has been questioned. Moreover, the studies are expensive and time-consuming and can use large numbers of animals, so only a small proportion of chemicals have been evaluated with these methods. Adequate coverage of different life stages, of end points of public concern, such as developmental neurotoxicity, and of mixtures of environmental agents is a continuing concern. Current tests also provide little information on modes and mechanisms of action, which are critical for understanding interspecies differences in toxicity, and little or no information for assessing variability in human susceptibility. Thus, the committee looked to recent scientific advances to provide a new approach to toxicity testing." (pg. 17)
1984:

Of Mice, Models, and Men: A critical evaluation of animal research
Andrew W. Rowan
State University of New York Press, Albany.

Most of our knowledge of the toxicity of a new chemical and the associated risks of human exposure is derived from animal tests. However, animal tests are very insensitive. Even in a trial using as many as 1,000 animals and a statistical confidence of 90%, a negative result could still conceal a real tumor incidence of 2 tumors per 1,000 animals. If this incidence were applied to the human population of the United States, it would mean 400,000 cancers. Reducing this upper limit of risk to 2 tumors per 1 million would require more than 3 million animals. (Lowrance, W.W. 1976. Of Acceptable Risk. Los Altos, CA.: William Kaufman.) The suggested test for carcinogenic potential uses only 800 animals, but the test's sensitivity is increased by using very large doses, which is itself a controversial solution. (pg. 194)
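Rowan's numbers can be checked with a quick sketch. Assuming a simple binomial model of tumor incidence and a 1976 US population of roughly 200 million (both my own illustrative assumptions, not stated in the quote):

```python
# Sketch of the insensitivity argument in Rowan's quote.
# Assumptions (mine, for illustration): tumors occur independently
# with a fixed per-animal probability, and the 1976 US population
# is roughly 200 million.

n_animals = 1000
p_true = 2 / 1000  # a real incidence of 2 tumors per 1,000 animals

# Chance that all 1,000 animals come up tumor-free even though the
# true incidence is 2 per 1,000:
p_all_negative = (1 - p_true) ** n_animals
print(round(p_all_negative, 3))  # ~0.135: above 0.10, so a test run at
# 90% confidence on 1,000 animals cannot rule out this incidence.

# Scaling that concealed incidence to the human population:
us_population = 200_000_000
print(int(us_population * p_true))  # 400,000 cancers
```

This is why a "negative" animal study offers weak reassurance: an incidence large enough to mean hundreds of thousands of human cancers can slip through a 1,000-animal trial more than one time in eight.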
1976:

Melmon, K.L. 1976. The clinical pharmacologist and scientifically unsound regulations for drug development. Clin. Pharmacol. Ther. 20: 125-9.
In most cases, the animal tests cannot predict what will happen when the drug is given to man. Standards for toxicology are often set by officials, such as federal regulators, who are responding to ill-advised but obviously well-intentioned legislators or consumer groups who may or may not be aware of the futility of increasing the amount of testing required when some tests often have no bearing on how man will respond to the drug. (In Rowan, pg. 200.)
