By Simone Ruggeri
March 4, 2026

The Lancet Study (Shang 2005)

Shang et al. (2005) is the most frequently cited study against homeopathy. Published in The Lancet, it compared 110 homeopathy trials with 110 conventional medicine trials and concluded that homeopathic effects were consistent with placebo. The accompanying editorial declared "the end of homoeopathy."

Two decades later, the study deserves examination on two distinct levels. First, on its own terms: the internal critique, developed by Ludtke, Rutten, and others, demonstrates that the headline conclusion is fragile, non-transparent, and sensitive to analytical choices the authors neither pre-specified nor disclosed. Second, on terms the study never examined: the meta-analysis of randomized controlled trials is structurally incapable of evaluating individualized medicine. The editorial's confidence rests not on the strength of the data but on an unexamined faith that this particular instrument is the right one for the question.

The Objection

"The Lancet (Shang et al. 2005) said it was 'the end of homeopathy.'"

This is the single most common citation used to dismiss homeopathic research. The framing is straightforward: a major meta-analysis in a prestigious medical journal concluded that homeopathy performs no better than placebo, and the journal's own editorial called it the final word. If that is the whole story, there is nothing left to discuss. It is not the whole story. There are two stories the citation conceals: one about the study's own fragility, and one about the assumptions it never questioned.

Why This Matters

This study matters not because of where it was published but because of what it reveals. It is the point where the biomedical paradigm aimed its most authoritative instrument -- the meta-analysis of RCTs -- directly at homeopathy. The result tells us as much about the instrument as about the target.

The meta-analysis occupies the apex of the conventional evidence hierarchy. It is the most abstract form of medical knowledge: the maximum distance from any individual patient, any individual practitioner, any individual therapeutic encounter. Within the epistemological framework that produced it -- the Kantian inheritance that treats the observer's judgment as bias and the individual case as noise -- this abstraction is a virtue. The further from the individual, the closer to truth. The Shang study is the most forceful application of this principle to homeopathy.

The question is not merely whether the study was done well. The question is whether the instrument can see what it claims to measure. (For the philosophical framework behind this question, see How We Know What We Know.)

What the Internal Critique Shows

The Study Design

Shang et al. matched 110 randomized, placebo-controlled trials of homeopathy with 110 randomized, placebo-controlled trials of conventional medicine, pairing them by condition and certain study characteristics. Both sets were analyzed using funnel-plot techniques designed to detect small-study bias (Egger et al., 1997). The authors then progressively restricted their analysis to trials they classified as both methodologically high quality and large.
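The funnel-plot technique cited here (Egger et al., 1997) is simple to sketch: regress each trial's standardized effect on its precision, and read the intercept; an intercept far from zero signals the asymmetry that small-study bias produces. The following is a minimal illustration, not an analysis of Shang's trials -- the data are synthetic, built with a deliberate bias term so the asymmetry is visible by construction:

```python
# Minimal sketch of the Egger regression for funnel-plot asymmetry.
# Data are SYNTHETIC for illustration, not the trials in Shang et al.
import math

def egger_intercept(log_ors, ses):
    """OLS of standardized effect (logOR/SE) on precision (1/SE).
    An intercept far from zero suggests funnel-plot asymmetry."""
    zs = [lo / se for lo, se in zip(log_ors, ses)]
    xs = [1.0 / se for se in ses]
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    slope = (sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
             / sum((x - mx) ** 2 for x in xs))
    return mz - slope * mx

# Synthetic trials: smaller studies (larger SE) report larger effects,
# the classic asymmetric-funnel pattern, via a built-in bias term of -0.8.
ses = [0.10, 0.15, 0.20, 0.30, 0.40, 0.50]
log_ors = [-0.10 - 0.8 * se for se in ses]

print(round(egger_intercept(log_ors, ses), 3))  # -0.8 by construction
```

The point of the sketch is only that funnel-plot asymmetry is a property both trial sets in Shang et al. exhibited, as discussed below.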

Two numerical results became central to the public narrative. When restricted to the final subset of 8 "larger, higher-quality" homeopathy trials, the pooled odds ratio was 0.88 (95% CI: 0.65-1.19) -- consistent with no significant effect beyond placebo. For the matched set of 6 "larger, higher-quality" conventional medicine trials, the odds ratio was 0.58 (95% CI: 0.39-0.85), showing a clear effect. The asymmetry between these two results formed the basis of the paper's conclusion: that homeopathy's clinical effects are compatible with the placebo hypothesis.
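The two headline intervals can be checked directly from the reported numbers. A 95% CI on an odds ratio implies a standard error on the log scale of (ln upper − ln lower) / (2 × 1.96), from which a z-statistic follows. A quick sketch using only the ORs and CIs as reported in the paper (the back-calculation itself is a standard meta-analytic identity):

```python
# Back-calculate z-statistics from a reported odds ratio and its 95% CI.
# SE on the log scale: (ln(upper) - ln(lower)) / (2 * 1.96).
import math

def z_from_or_ci(or_, lo, hi):
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    return math.log(or_) / se

z_homeopathy = z_from_or_ci(0.88, 0.65, 1.19)    # 8-trial homeopathy subset
z_conventional = z_from_or_ci(0.58, 0.39, 0.85)  # 6-trial conventional subset

print(round(z_homeopathy, 2))    # -0.83: CI crosses 1, not significant
print(round(z_conventional, 2))  # -2.74: CI excludes 1, significant
```

The homeopathy z-statistic (about -0.83) is well inside the conventional significance boundary of ±1.96; the conventional-medicine z (about -2.74) is well outside it. This is the asymmetry on which the paper's entire conclusion rests.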

The Eight Trials Problem

Of 110 homeopathy trials, only 8 were retained in the analysis that produced the headline conclusion. These 8 trials were selected using criteria that were not specified in advance. The selection involved two thresholds applied sequentially: a quality threshold (based on the Jadad scale and allocation concealment) and a size threshold. Neither threshold was pre-registered or derived from a published protocol.

The identities of these 8 trials were not disclosed in the original publication. They were not listed in the paper, not in the online supplement, and not in any publicly available document at the time of publication. They were eventually identified only through Freedom of Information requests and subsequent re-analysis work by independent researchers.

An entire field of medicine was declared finished on the basis of 8 trials whose identities were secret, selected by criteria that were not pre-specified. This is an extraordinary level of opacity for any study, let alone one making a definitive negative claim about an entire therapeutic system.

The Sensitivity Problem (Ludtke and Rutten, 2008)

Ludtke and Rutten published a detailed re-analysis in the Journal of Clinical Epidemiology in 2008. Their starting point was simple: if the main conclusion relies on 8 trials drawn from a larger pool (including 21 trials classed as higher quality in Shang et al.), then we should examine how the result behaves under alternative but equally defensible subsets.

Their central finding was that Shang's conclusion was highly sensitive to the inclusion criteria. When they analyzed all higher-quality homeopathy trials, the pooled odds ratio was 0.76 (95% CI: 0.59-0.99) -- a statistically significant result favoring homeopathy. As the subset was progressively restricted toward larger studies, the pooled estimate shifted and the statistical strength diminished. Shang's 8-trial result (OR 0.88) appeared as one point along that sensitivity curve rather than as an inevitable endpoint.

In meta-analysis, when the conclusion depends on which of several defensible analytical choices you make, the honest conclusion is that the data are equivocal, not that one particular choice settles the question. Ludtke and Rutten demonstrated that Shang's final-subset approach was one reading of the data, not the only valid reading.

Their re-analysis also showed that the Shang conclusion was vulnerable to the exclusion or inclusion of a single trial. Removing one trial from the final 8 could shift the result to statistical significance in favor of homeopathy. Conclusions that hinge on one trial in a final subset of 8 are fragile, not definitive.
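The kind of fragility Ludtke and Rutten describe is easy to reproduce in miniature. The sketch below pools log odds ratios by fixed-effect inverse-variance weighting, then repeats the pooling with each trial left out in turn. The eight trials are invented for illustration (Shang's trial-level data were not published in the original paper), constructed so that one large near-neutral trial dominates the pooled estimate:

```python
# Fixed-effect inverse-variance pooling with a leave-one-out sensitivity
# check. The trial data are HYPOTHETICAL, chosen to show how a single
# trial can move a pooled result in or out of statistical significance.
import math

def pool(trials):
    """Pool (log_or, se) pairs; return pooled OR with its 95% CI."""
    weights = [1.0 / se ** 2 for _, se in trials]
    pooled = sum(w * lo for (lo, _), w in zip(trials, weights)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# (log odds ratio, standard error); trial 8 is large and near-neutral.
trials = [(-0.35, 0.30), (-0.30, 0.28), (-0.40, 0.32), (-0.25, 0.26),
          (-0.45, 0.35), (-0.20, 0.30), (-0.30, 0.33), (0.05, 0.12)]

or_, lo, hi = pool(trials)
print(f"all 8 trials: OR {or_:.2f} ({lo:.2f}-{hi:.2f})")  # CI crosses 1

for i in range(len(trials)):
    or_, lo, hi = pool(trials[:i] + trials[i + 1:])
    if hi < 1.0:  # dropping this trial makes the pooled result significant
        print(f"drop trial {i + 1}: OR {or_:.2f} ({lo:.2f}-{hi:.2f})")
```

With all eight hypothetical trials pooled, the CI crosses 1; drop the single dominant trial and the pooled estimate becomes significant. A conclusion with this structure is exactly what Ludtke and Rutten mean by "fragile."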

The Asymmetry (Rutten and Stolper, 2008)

Rutten and Stolper examined the post-publication data that had become available, including the identification of the 8 final trials and the sensitivity of the results to different quality assessments. They highlighted an asymmetry that the original paper left unexamined: both the homeopathy and conventional medicine trial sets showed funnel-plot asymmetry (consistent with small-study bias or other sources of heterogeneity), yet only the homeopathy set was interpreted as evidence of mere placebo response. The conventional medicine trials, subjected to the same analytical framework, yielded a positive result -- but on the basis of only 6 trials. The difference in interpretation was driven by the analytical structure and the authors' prior assumptions, not by a clear-cut difference in the data.

Contemporary Responses (2005)

The same year the Shang paper was published, Witt and colleagues submitted a letter to The Lancet raising methodological concerns. Their objections included the non-transparency of trial selection, the failure to pre-specify inclusion criteria for the final analysis, and the absence of any sensitivity analysis in the original publication. Multiple other research groups raised similar concerns in published correspondence (Walach et al., 2005; Linde and Jonas, 2005; Fisher et al., 2005), reflecting widespread unease with the paper's analytical path among researchers familiar with the underlying trial literature.

Summary of the Internal Critique

Even within the biomedical paradigm's own standards, the Shang study has three problems that its defenders have never resolved: non-pre-specified selection criteria, non-disclosed trial identities, and demonstrated sensitivity to subset composition. These are not minor technical objections. They are the kinds of problems that, in any other context, would prevent a study from being treated as definitive.

What the Epistemological Critique Shows

The internal critique is necessary but insufficient. It says: "You used your own methods badly." The epistemological critique says something different: "Your methods, even when used well, cannot see what they claim to measure."

The Meta-Analysis Cannot Evaluate Individualized Medicine

The RCT assumes the patient is an interchangeable unit. It takes a population sharing a diagnostic label and averages their responses to a standardized intervention. In homeopathic practice, each patient is an unrepeatable totality -- what Hahnemann called the Inbegriff -- and the remedy is selected for this patient based on this totality of symptoms. Two patients with the same diagnosis may require entirely different remedies. Averaging their responses to a single remedy is like averaging the responses of a German-speaker and a Japanese-speaker to a French novel and concluding that the novel is unintelligible.

The meta-analysis compounds this problem. It aggregates the results of multiple RCTs, each of which already averaged across populations. It is the most abstract possible form of medical knowledge -- the maximum distance from any individual patient, any individual practitioner, any individual encounter. To declare "the end of homeopathy" on the basis of a meta-analysis is to declare that a system of individualized medicine has been refuted by an instrument that systematically destroys individuality. The instrument cannot see what it claims to measure.

This is not an excuse. It is a structural observation about the relationship between method and object. The RCT is the right tool for evaluating standardized pharmaceutical interventions administered to diagnostically homogeneous populations. It is the wrong tool for evaluating a medicine whose entire logic depends on individualization, practitioner perception, and the dynamic totality of the case. Different paradigms require different methods of evaluation. (For why this is so, see How We Know What We Know.)

The Galileo Parallel

Paul Feyerabend's reconstruction of the Galileo case illuminates the structure of the situation. In the early seventeenth century, the evidence against Copernicanism was strong within the Aristotelian framework. The tower argument, the absence of stellar parallax, the apparent size of stars -- all of these refuted the heliocentric model by the standards of the established physics. Galileo was right, but for reasons that no methodology of his time could sanction. If he had followed the rules -- respected the evidence, abandoned refuted theories, used only validated instruments -- Copernicanism would have died in the cradle.

The demand that homeopathy prove itself by RCT is structurally identical to the demand that Galileo prove Copernicanism by Aristotelian physics. It is the demand that a new paradigm validate itself using the old paradigm's criteria -- criteria designed so that the new paradigm must fail. The RCT cannot detect what homeopathic practice knows, not because homeopathy lacks evidence, but because the measuring instrument is calibrated to measure something else entirely.

Feyerabend was precise about this:

"Being has no well-defined structure but reacts differently to different approaches." -- Paul Feyerabend, The Tyranny of Science

Reality is not a simple thing that one method can capture. Methodological pluralism is not a concession to weakness but a response to the abundance of Being.

What the Lancet Editorial Really Declares

The editorial titled "The end of homoeopathy" warrants direct attention -- not as an editorial overreach, which is how the internal critique frames it, but as a statement of faith.

When the editorial declares that "doctors need to be bold and honest" in telling patients that homeopathy has no benefit beyond placebo, it exercises what Feyerabend identifies as institutional power: one tradition using its control of prestigious platforms to suppress a rival tradition. Not by refuting homeopathy's evidence on its own terms, but by declaring that only one kind of evidence counts. The editorial does not argue that homeopathy's clinical observations, its 200-year materia medica, its case documentation, and its provings are wrong. It simply treats them as though they do not exist -- because within the epistemological framework the editorial inhabits, they are not evidence at all.

The editorial's certainty rests on an assumption it never examines: that a meta-analysis of RCTs constitutes the supreme form of medical knowledge, that statistical aggregation at maximum distance from the individual case is more reliable than the trained practitioner's direct perception. This assumption is not a scientific finding. It is the Kantian inheritance -- the conviction that human perception is unreliable and must be replaced by statistical machinery. It is a philosophical commitment that has hardened, over two centuries, into the unconscious metaphysics of Western medicine.

Massimo Scaligero's formulation applies with uncomfortable precision:

"Materialism is man's faith in matter, which he does not know how to experience through the concrete forces of thought. It is the most obscure mysticism, because it considers itself the opposite of mysticism." -- Massimo Scaligero, La Luce

The Lancet editorial is a moment of faith dressed in the language of science. Not bad faith -- genuine, sincere faith in the proposition that this particular way of knowing is the only way of knowing. But faith nonetheless. To call it a scientific conclusion is to mistake the category.

The Broader Context

The Shang study did not exist in a vacuum. By 2005, the accumulation of homeopathy trial data had reached a point where meta-analytic synthesis was both possible and necessary. The earlier meta-analysis by Linde et al. (1997) had found a pooled effect significantly favoring homeopathy, and a subsequent re-analysis by the same group (1999) with stricter quality criteria reduced but did not eliminate the effect. Shang et al. was, in part, a response to that earlier work.

What makes the Shang paper historically important is not its data, which were drawn from the same pool of trials available to other reviewers, but the editorial apparatus that surrounded it. The Lancet's decision to pair the paper with an editorial calling for the end of homeopathy transformed a contested meta-analysis into a cultural event. The paper became shorthand for a settled negative verdict in a way that its actual methodology does not support.

The Shang study is historically significant because it is the moment when the biomedical paradigm most explicitly declared its faith -- the conviction that a meta-analysis of 8 trials, selected by non-pre-specified criteria whose identities were kept secret, constitutes sufficient grounds to dismiss two centuries of clinical observation by trained practitioners. This is not a scientific conclusion. It is a confession of epistemological commitment.

Subsequent meta-analyses (Mathie et al., 2014; Mathie et al., 2017) using different methods have found small but statistically significant effects favoring homeopathy. But the deeper point is not which meta-analysis reaches the right answer. The deeper point is that no meta-analysis of RCTs can settle the question, because the instrument systematically destroys the information on which homeopathic knowledge depends -- the individuality of the patient, the judgment of the practitioner, the meaningful totality of the case.

What is needed is not more meta-analyses but research methods appropriate to individualized medicine: systematic case documentation, pragmatic whole-systems trials, n-of-1 designs, provings, and the systematic refinement of clinical perception through training and peer review.

Summary

Shang et al. (2005) fails on two levels. Internally, its headline conclusion -- that homeopathic effects are consistent with placebo -- emerged from a restricted subset of 8 trials (OR 0.88, 95% CI: 0.65-1.19), selected by non-pre-specified criteria whose identities were not disclosed. When Ludtke and Rutten (2008) analyzed the broader set of higher-quality trials, the pooled odds ratio was 0.76 (95% CI: 0.59-0.99), a statistically significant result favoring homeopathy. Different but equally defensible inclusion criteria produce different conclusions: the Shang result is one point along a sensitivity curve, not the only valid reading of the data.

Externally, the meta-analysis is structurally incapable of evaluating individualized medicine. It destroys the individuality of the patient by averaging, eliminates the practitioner's perception by blinding, and dissolves the meaningful totality of symptoms by standardizing. The Lancet editorial's declaration that this constitutes "the end of homeopathy" is not a scientific conclusion but a statement of faith -- faith in the proposition that statistical abstraction at maximum distance from the individual case is the highest form of medical knowledge. That proposition is a philosophical commitment inherited from Kant, not a finding established by evidence.

The data, evaluated within the meta-analytic framework, are equivocal. But the deeper question is whether the meta-analytic framework is the right court of appeal for homeopathic knowledge at all.

For the philosophical framework behind this analysis, see How We Know What We Know. For a parallel case study in how inclusion criteria determine conclusions, see our critique of the NHMRC report, which applied a similar meta-analytic framework with similarly fragile results. For related objections and how we address them, see the Skeptic Hub.

Frequently Asked Questions

What did Shang et al. (2005) actually conclude?

The paper concluded that the clinical effects of homeopathy are "compatible with the placebo hypothesis." This conclusion was based on a final analysis of 8 homeopathy trials classified as high quality and large (OR 0.88, 95% CI: 0.65-1.19), compared with 6 conventional medicine trials meeting the same criteria (OR 0.58, 95% CI: 0.39-0.85). When Ludtke and Rutten (2008) analyzed the broader high-quality subset, the pooled odds ratio was 0.76 (95% CI: 0.59-0.99), which statistically favors homeopathy.

Why were only 8 trials used for the final conclusion?

The authors applied sequential restrictions for study quality (Jadad score and allocation concealment) and sample size. These criteria were not pre-specified in a published protocol. The resulting subset was small enough that the conclusion was sensitive to the inclusion or exclusion of individual trials -- a fragility that Ludtke and Rutten (2008) documented in detail.

Were the 8 trials ever publicly identified?

Not in the original publication. The identities of the 8 homeopathy trials (and the 6 conventional trials) were obtained only through Freedom of Information requests and independent analysis. This opacity was a significant criticism raised by multiple research groups in published correspondence.

Does the Ludtke and Rutten re-analysis disprove Shang?

It demonstrates that Shang's conclusion is fragile. Different definitions of "large" or "high quality," applied with equal methodological justification, produce different results -- some favoring homeopathy, some not. The re-analysis shows that the Shang finding is one of several possible interpretations of the same underlying data.

Can a meta-analysis of RCTs evaluate homeopathy?

The internal answer is: this particular meta-analysis did so badly. The epistemological answer is: no meta-analysis of RCTs can evaluate individualized medicine, because the RCT systematically destroys the information homeopathic practice depends on -- the individuality of the patient, the practitioner's trained perception, and the meaningful totality of symptoms. Different paradigms require different methods of evaluation. For a fuller treatment of this argument, see How We Know What We Know.

How should I interpret the Shang study today?

As a revealing case study in how epistemological commitments shape the interpretation of data. On the internal level, the data are equivocal -- different analytical choices yield different conclusions. On the epistemological level, the meta-analysis is structurally incapable of detecting what homeopathic practice knows, because the measuring instrument is calibrated for a different kind of medicine. The Lancet editorial's certainty reflects not the strength of the evidence but the depth of a philosophical commitment.

References

  1. Shang, A., Huwiler-Muntener, K., Nartey, L., Juni, P., Dorig, S., Sterne, J.A.C., Pewsner, D., Egger, M. Are the clinical effects of homoeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy. The Lancet. 2005;366(9487):726-732. doi:10.1016/S0140-6736(05)67177-2.
  2. The Lancet (editorial). The end of homoeopathy. The Lancet. 2005;366(9487):690. doi:10.1016/S0140-6736(05)67149-8.
  3. Ludtke, R., Rutten, A.L.B. The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials. Journal of Clinical Epidemiology. 2008;61(12):1197-1204. doi:10.1016/j.jclinepi.2008.06.015.
  4. Rutten, A.L.B., Stolper, C.F. The 2005 meta-analysis of homeopathy: the importance of post-publication data. Homeopathy. 2008;97(4):169-177. doi:10.1016/j.homp.2008.09.008.
  5. Witt, C.M., Ludtke, R., Willich, S.N. Letter to the Editor regarding Shang et al. The Lancet. 2005;366(9503):2083.
  6. Egger, M., Davey Smith, G., Schneider, M., Minder, C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315(7109):629-634. doi:10.1136/bmj.315.7109.629.
  7. Linde, K., Clausius, N., Ramirez, G., et al. Are the clinical effects of homoeopathy placebo effects? A meta-analysis of placebo-controlled trials. The Lancet. 1997;350(9081):834-843.
  8. Linde, K., Scholz, M., Ramirez, G., et al. Impact of study quality on outcome in placebo-controlled trials of homeopathy. Journal of Clinical Epidemiology. 1999;52(7):631-636.
  9. Mathie, R.T., Lloyd, S.M., Legg, L.A., et al. Randomised placebo-controlled trials of individualised homeopathic treatment: systematic review and meta-analysis. Systematic Reviews. 2014;3:142.
  10. Mathie, R.T., Ramparsad, N., Legg, L.A., et al. Randomised, double-blind, placebo-controlled trials of non-individualised homeopathic treatment: systematic review and meta-analysis. Systematic Reviews. 2017;6:63.
  11. Walach, H., Jonas, W., Lewith, G. Are the clinical effects of homoeopathy placebo effects? The Lancet. 2005;366(9503):2081. doi:10.1016/S0140-6736(05)67877-4.
  12. Linde, K., Jonas, W. Are the clinical effects of homoeopathy placebo effects? The Lancet. 2005;366(9503):2081-2082. doi:10.1016/S0140-6736(05)67878-6.
  13. Fisher, P., Berman, B., Davidson, J., Reilly, D., Thompson, T. Are the clinical effects of homoeopathy placebo effects? The Lancet. 2005;366(9503):2082-2083. doi:10.1016/S0140-6736(05)67879-8.
  14. Feyerabend, P. Against Method: Outline of an Anarchistic Theory of Knowledge. 3rd ed. London: Verso; 1993.
  15. Feyerabend, P. The Tyranny of Science. Cambridge: Polity Press; 2011.
  16. Scaligero, M. La Luce: Introduzione all'Imaginazione Creatrice. Rome: Edilibri; 2005.