Prove it or lose it

When we seek medical attention or advice, we expect our health providers to treat us based on the best science. After all, Australia spends about 9% of its entire GDP on healthcare, so we must be getting high-quality, science-based care. Right?

At a Medical Journal of Australia-sponsored summit on Clinical Trials in Australia, Professor Steve Webb of the Australia and New Zealand Intensive Care Society described a study he performed in intensive care, looking at all the treatments provided over a two-day period and critically examining how many of these were based on good evidence. As this is the pointy end of medical care, with the sickest patients, it is perhaps not a surprise that more than 2000 different treatments were provided over this time.

More surprising is the finding that only 5% of the treatments provided had good scientific evidence to support their use.

Let me say that again: 95% of the treatments provided to the sickest patients were not based on good evidence.

How is that possible?

Is this the sort of medical care the community expects?

Australia spends about $750 million on medical research annually through the National Health and Medical Research Council, with additional funding from other bodies and commercial organisations.

About half of this amount is spent on basic laboratory research, helping us better understand how the body works and malfunctions, and identifying and testing potential new treatments in cells and animals. The other half is split between different types of clinical or patient-based research. This includes epidemiology to help us understand disease patterns and risk factors, trials of new or established interventions, and health services research exploring methods of providing health care.

Randomised clinical trials are the only reliable method of ascertaining whether a new or established treatment is truly effective and whether it is better than, or even equivalent to, established treatments.

Of the $750 million spent on medical research by the NHMRC, less than $100 million is spent on randomised trials. In addition, many of the funded trials are too small to be definitive, or examine only early-stage potential new treatments.

As a result, the amount of money spent finding out whether the things that are routinely done in medical practice actually work is likely only a few tens of millions of dollars.

The question must be asked: is it appropriate for us to spend several hundred million dollars discovering new treatments, but only a few tens of millions of dollars testing whether these treatments work at all, or are better than other alternatives? Clearly the answer is no, but changing this balance will be difficult.

Australia is rightly proud of its very strong track record in laboratory and other medical research and has built up a substantial, world-class infrastructure and industry in this area. No-one would suggest this should be limited given the broad range of health and economic benefits that have resulted from this work for people not only in Australia, but around the world.

Rather, it is clear that we need to invest much more heavily than we are at the moment in clinical trials that test and compare treatments and strategies in development and those already in widespread use today.

Other countries have come to the same conclusion. The US Government has invested over a billion dollars in a new Center for Comparative Effectiveness Research, tasked with funding trials that compare treatments or approaches to important health conditions. The UK has created a new National Institute for Health Research, similarly focused on funding research that compares clinical practice strategies. Both have invested large, additional sums of money in this area, recognising the critical need for good evidence that can help define optimal clinical practice in a broad range of conditions.

Australia needs a similar approach.

At the MJA Clinical Trials Summit, speaker after speaker provided examples of studies that had changed the way treatment was given around the world, based on clinical trials conducted in and/or led from Australia.

The George Institute is a regional and global leader in this type of research. Some examples of completed and ongoing studies of this type conducted by the George Institute include:

  • ADVANCE – comparing two different blood sugar targets, as well as assessing blood pressure lowering, in people with diabetes
  • SAFE – comparing two different fluid regimens in people admitted to intensive care
  • INTERACT and ENCHANTED – comparing two approaches to lowering blood pressure in people with haemorrhagic and ischaemic stroke respectively, with the latter also comparing two doses of tPA (a clot-busting drug)
  • Kanyini GAP and the SPACE collaboration – looking at a combination pill compared with usual care for cardiovascular protection
  • ACTIVE dialysis – comparing two different dialysis intensities in kidney failure
  • PRESERVE – assessing treatments to protect the kidneys in people receiving contrast for X-rays
  • CHEST – comparing fluid replacement options in intensive care
  • PACE – comparing regular with as-required paracetamol in back pain

These studies typically range in size from a few hundred people to many thousands, and can be expensive as a result. Growing this type of research will require a substantial injection of funds, ideally in the hundreds of millions.

How might this happen?

Secretary of the Department of Health and Ageing, Jane Halton, made it clear at the Summit that there is no new pot of money available, so increased funding for these trials will require savings to be made in other areas.

Many trials comparing treatments will end up saving governments large amounts of money. The SAFE trial, conducted by the Critical Care and Trauma Division at the George Institute, showed that an expensive albumin solution was no better than, and in some situations clearly worse than, a cheap and plentiful saline solution. Many trials in this area have similarly shown the cheaper options to be equally or more effective than newer, expensive treatments. Unfortunately, it is not possible to take a cut of these potential savings in advance, as was advocated by some Summit participants.

Appropriately funding these sorts of trials will instead require Government, researchers and the medical profession to work together to identify budgetary savings that could be diverted to clinical trials. One possibility is that the funding for some of these expensive yet unproven interventions could be restricted, and instead channelled into trials that can reliably define whether they actually work.

We currently have a great opportunity to correct this imbalance. At the Clinical Trials Summit, representatives of government, academia and the medical sector demonstrated a clear alignment of goals, and a strong commitment to change.

The time is right to convert this into direct action that improves health care for all.

See reference at MJA InSight.