Forwards-Looking Forum

1st December 2015

Venue: Lecture Theatre, Royal Statistical Society, London

Target audience: Statisticians

Programme:

  • 09:30-10:00 - Registration and tea/coffee
  • 10:00-10:10 - Welcome address
  • 10:10-10:40 - Simon Day (Clinical Trials Consulting & Training): New Methods? New Thinking?
  • 10:40-11:10 - Aaron Dane (AstraZeneca): The use of limited data to achieve regulatory drug approval: Options for Antibiotic Drug Development
  • 11:10-11:25 - Coffee
  • 11:25-11:55 - Martin Posch (Medical University of Vienna): An extrapolation framework to specify requirements for drug development in children
  • 11:55-12:10 - Kit Roes (University Medical Centre, Utrecht): Trial design for rare diseases: Examples of progress from the Asterix project
  • 12:10-12:25 - Ralf-Dieter Hilgers (RWTH Aachen University): Integrated Design and Analysis of Clinical Trials in Small Population Groups (The IDeAl Project)
  • 12:25-12:40 - Sofía Villar (MRC Biostatistics Unit Cambridge): Bringing patient population size into the clinical trial design using response-adaptive randomisation
  • 12:40-13:00 - Discussion
  • 13:00-14:00 - Lunch
  • 14:00-14:30 - Nick Catlin (Action Duchenne): Unblocking the drug pipeline – treating Duchenne Muscular Dystrophy
  • 14:30-15:00 - Tim Morris (MRC Clinical Trials Unit London): A framework for the design and analysis of phase III randomised trials when large-scale trials are not possible
  • 15:00-15:30 - Marie-Cécile Le Deley (Institut Gustave Roussy): Randomised controlled trial designs in the setting of rare diseases: evaluation of a series of trials over a long-term research horizon
  • 15:30-16:00 - Coffee and poster session
  • 16:00-16:15 - Nigel Stallard (University of Warwick): Recent advances in methodology for clinical trials in small populations: the InSPiRe project
  • 16:15-16:30 - Lisa Hampson (Lancaster University): Bayesian methods for the design and interpretation of trials in rare diseases
  • 16:30-16:45 - Discussion and closing statements

Abstracts

Simon Day (Clinical Trials Consulting & Training) - New Methods? New Thinking?

This talk will serve as an introduction to the day.

We have options when we want to get evidence about therapies in rare disease:

  • option 1 is to simply throw our arms up in despair and say that there is nothing we can usefully do;
  • option n is to recognise that an indication is classed as "orphan" when there are up to about 525,000 patients in Europe, an additional 200,000 patients in the US, and a whole load more in other places too – so what's the problem?

Between these extremes lie difficult problems to which patients desperately need solutions.

Do we need new statistical methods? Probably yes; but then we do for diabetes, ischaemic heart disease, malaria, and so on. But in rare diseases, we have opportunities that we don't always have elsewhere: not least, there may be nothing else out there that works; all we have to do is show we are better than placebo. That's not always so hard. I will try to expand on other opportunities too.

By the way – don't forget the elephant in the room… safety data. Statisticians have often not been to the fore when evaluating safety data (although there are notable exceptions and the scene is changing). So what's the solution to this problem, if there is one?

Nicky Best (GlaxoSmithKline) - Putting the ‘B’ words into small clinical trials – the role of ‘Biology’ and ‘Bayes’ in trials in small populations

Pharmaceutical companies carry out clinical trials to provide evidence to inform decision making about asset progression and medicines development, and ultimately to support submissions to regulatory agencies who will decide whether or not to license a product. It is important to distinguish between the statistical evidence provided by a trial (i.e. ‘what the data say’), and the decisions we make based on this evidence. Whilst we can (and do) make wrong decisions, we cannot have wrong evidence (assuming the trial has been conducted correctly). However, evidence can be misleading and it can be weak. Whilst the probability of observing strong misleading evidence can be shown to be small, there are no such bounds on the probability of observing weak evidence.
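
To make the distinction concrete, the sketch below (not taken from the talk) uses a likelihood-ratio characterisation of evidence in the style of Royall, for two simple hypotheses about a normal mean: evidence is "weak" when the likelihood ratio lies between 1/k and k, and "strong but misleading" when it favours the wrong hypothesis by at least k. The probability of weak evidence shrinks with sample size, while the probability of strong misleading evidence stays small; all numerical settings are assumptions.

```python
# A small numerical sketch of strong-misleading vs weak evidence (settings are assumptions).
import numpy as np
from scipy.stats import norm

delta, sigma, k = 0.5, 1.0, 8.0     # competing means 0 vs delta; evidence threshold k

def evidence_probs(n):
    """Probabilities of weak and of strong misleading evidence when the true mean is 0."""
    # log LR(H_delta vs H_0) ~ Normal(-n*delta^2/(2*sigma^2), n*delta^2/sigma^2) under H_0
    mean = -n * delta ** 2 / (2 * sigma ** 2)
    sd = np.sqrt(n) * delta / sigma
    weak = norm.cdf(np.log(k), mean, sd) - norm.cdf(-np.log(k), mean, sd)
    misleading = norm.sf(np.log(k), mean, sd)   # bounded above by 1/k in general
    return weak, misleading

for n in (10, 30, 100):
    weak, misleading = evidence_probs(n)
    print(f"n={n:3d}: P(weak evidence)={weak:.2f}, P(strong misleading evidence)={misleading:.3f}")
```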

Weak evidence is particularly a problem for small clinical trials since the probability of weak evidence is directly related to sample size. And in the face of weak evidence, the probability of making a wrong decision based on such evidence clearly increases. So what can we do to improve the strength of evidence provided by small clinical trials, and how can we reduce the chances of making incorrect decisions based on such evidence? In this talk, I will argue that better use of the ‘B’ words - ‘Biology’ and ‘Bayesian’ thinking - offer a credible way forward:

  • Biology – we need a very well-defined biological hypothesis and a precise mapping of that hypothesis to the endpoints being measured in the trial. We must also avoid post hoc rationalisation and the generation of pseudo-hypotheses on the basis of results from small trials.
  • Bayes – Bayesian methods provide a natural framework to monitor accrual of evidence for or against a well-defined hypothesis, either within a single trial or as a formal tool to quantify the accumulated evidence to date. Bayesian inference cannot, by itself, ‘solve’ the problem of weak evidence from small trials – typically, the prior distribution about the hypothesis of interest is either so strong that weak evidence from a small trial won’t shift it by much, or so weak that it adds virtually nothing to the weak evidence from the trial. However, recent methodological advances to develop robust historical priors that dynamically down-weight prior information relative to current evidence, based on the observed discrepancy between them, may offer a way forward; a rough illustration of this kind of down-weighting is sketched below.
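
As a rough illustration of this dynamic down-weighting (a minimal sketch with invented numbers, not the method used in any particular trial), the code below forms a two-component robust mixture prior for a binomial response rate, combining an informative component built from historical data with a vague component. The weight on the historical component is updated by how well each component predicts the new trial data, so discrepant historical information is automatically discounted.

```python
# Robust two-component mixture prior for a binomial response rate (illustrative assumptions).
import numpy as np
from scipy.special import betaln, gammaln

def log_marginal(y, n, a, b):
    """Log beta-binomial marginal likelihood of y responders in n patients under Beta(a, b)."""
    return (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
            + betaln(a + y, b + n - y) - betaln(a, b))

components = {"historical": (20.0, 30.0),    # informative: historical response rate ~ 0.4
              "vague": (1.0, 1.0)}            # flat Beta(1, 1)
prior_weights = {"historical": 0.8, "vague": 0.2}

y, n = 2, 15                                  # new small trial, discrepant with the history

log_post = {k: np.log(prior_weights[k]) + log_marginal(y, n, *ab)
            for k, ab in components.items()}
norm_const = np.logaddexp(*log_post.values())

# The historical component loses weight when the new data disagree with it.
for k, (a, b) in components.items():
    weight = np.exp(log_post[k] - norm_const)
    print(f"{k:10s}: posterior weight {weight:.2f}, component posterior Beta({a + y:.0f}, {b + n - y:.0f})")
```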

I will illustrate some of these ideas using some recent pharmaceutical examples.

Ralf Herold (European Medicines Agency) - Regulatory perspectives on trials in small populations

Kit Roes (University Medical Centre, Utrecht) - Trial design for rare diseases: Examples of progress from the Asterix project

Clinical research designs to study new drugs and treatments for rare diseases face a fundamental challenge: they are badly needed to evaluate treatments for often devastating diseases, yet are severely limited in the number of patients that can be recruited within a reasonable timeframe. This may require more from statistical methodology and statisticians than "squeezing out" additional efficiency from conventional designs or from existing improvements to trial designs for common diseases. The Advances in Small Trials dEsign for Regulatory Innovation and eXcellence (Asterix) project includes the following:

  • (Quantitative) methods to include patient-level information and patient perspectives in design and decision making throughout the clinical trial process.
  • Statistical design innovations for rare diseases in individual trials and series of trials.
  • Re-consideration of the scientific basis for levels of evidence to support decision making at the regulatory level.
  • A framework for rare diseases that allows rational trial design choices.
  • Validation of new methods against real-life data and regulatory decisions, to improve regulatory decision making.

In this presentation, we will focus on approaches to improve design whilst taking into account the large uncertainty in estimates of variability and heterogeneity.

Ralf-Dieter Hilgers (RWTH Aachen University) - Integrated Design and Analysis of Clinical Trials in Small Population Groups (The IDeAl Project)

The ability of conventional statistical methods to evaluate new therapeutic approaches for any given rare disease is limited due to the small number of patients concerned. This means that established statistical approaches to demonstrate the efficacy and safety of therapies may fail in this situation. Thus, there is an urgent need not only to develop new therapeutic approaches to treat diseases but also to develop new statistical methods in order to establish which approaches work. This is the point of departure for the IDeAl ("Integrated Design and Analysis of small population group trials (SPG)") research project, which aims to use and bring together all possible sources of information in order to optimise the process.

The talk will give an overview of the IDeAl project and will then focus on the selection of the most appropriate randomisation procedure as a means of improving the validity of trial designs.

Sofía Villar (MRC Biostatistics Unit Cambridge) - Bringing patient population size into a clinical trial design using response-adaptive randomisation

The rise of the randomised clinical trial 70 years ago transformed medical research into science. Since then, the focus of traditional clinical trials has been to learn which therapies will benefit future patients. This goal, as codified in the 1979 Belmont Report, draws a clear distinction between clinical research and clinical practice. Clinical practice aims to treat patients as effectively as possible, while clinical research focuses on providing controlled statistical evidence to support decisions that affect future patients. This separation between research and practice is applied in the same way for all diseases, regardless of how common they are. From this traditional perspective, sample sizes for clinical trials are determined based mainly on statistical power and type I error considerations. For common conditions, this paradigm results in the largest patient benefit from a population-based view. However, developing therapies for rare diseases through this traditional mould does not lead to the largest patient benefit in the whole population. This is in part because the approach requires sample sizes that are several times larger than the number of patients in the world known to have the condition. This means that even if the population were as large as the required trial size, the results would benefit very few people (if any) by the time they finally become available.

A way out of this conundrum is to take a Bayesian approach and develop a trial design that uses information about the population size to determine how much research should be done in order to maximise the health of all the patients in it. Such an approach can preserve the scientific advantages of randomisation if implemented by means of a response-adaptive randomisation procedure. Response-adaptive randomisation assigns more patients to the better-performing arms by aligning the randomisation probability with treatment efficacy as information about it accumulates. In this talk, I will present a Bayesian response-adaptive randomisation procedure that randomises patients among the available treatments based not only on the outcome data from treated patients but also on the expected evolution of outcomes from patients still to be treated in the trial. I will argue that such a design offers a way to adaptively balance the conflicting goals of statistical power (medical research) and patient benefit (medical practice) for a given population size.
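
The sketch below is a generic illustration of Bayesian response-adaptive randomisation, not the specific forward-looking procedure described in the talk: a two-arm Beta-Bernoulli model with Thompson-sampling-style allocation, so the randomisation probability tracks the accumulating evidence about treatment efficacy. The true response rates, prior and trial size are all assumptions for illustration.

```python
# Minimal Bayesian response-adaptive randomisation sketch (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(1)
true_rates = [0.3, 0.5]          # unknown success probabilities, assumed for the simulation
successes = np.zeros(2)
failures = np.zeros(2)
n_patients = 60                  # a small trial, as in a rare-disease setting

for _ in range(n_patients):
    # Posterior for each arm is Beta(1 + successes, 1 + failures).
    # Allocate by a posterior draw (Thompson sampling), so better-performing
    # arms receive more patients as evidence accumulates.
    draws = rng.beta(1 + successes, 1 + failures)
    arm = int(np.argmax(draws))
    outcome = rng.random() < true_rates[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome

print("patients per arm:", (successes + failures).astype(int))
print("posterior means :", (1 + successes) / (2 + successes + failures))
```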

Nick Catlin (Action Duchenne) - Unblocking the drug pipeline - treating Duchenne Muscular Dystrophy

Having a child diagnosed with a rare life-limiting condition like Duchenne Muscular Dystrophy (DMD) is not what any of us wants when we start a family and certainly not what you expect. It's a shock and knocks you back. You can't help thinking about what might have been or worrying about all the care you are going to have to give and if you are really up for that challenge.

Duchenne is relentless and it is heartbreaking to see your child gradually lose all their muscle function. It's like living with a slow death sentence. It can also come with high risks of cognitive disorders like autism, ADHD and dyslexia at a time when you think it can't be all that bad! The battles are many, and mostly involve taking on a society still not able to accommodate young people with disabilities. The social barriers and discrimination are widespread, from not being able to use public transport to being sidelined in special education and not getting jobs.

But of course, young people are wonderful and their spirit and resilience is a lesson to us all. Many overcome their disabilities to travel abroad, take up college degree courses and PhDs, get married and have kids of their own. With improved medical care, the guys are living longer into their 20's and beyond.

There is no doubt that our sons and our community want to see the end of this disease. We do not want future generations to have to endure the effects of genetic conditions like DMD. It is now more than just a hope that medicines can be found: new research, clinical trials and drugs coming to market have the potential to at least slow the progression of Duchenne. However, drug development programmes are far too slow and very costly. Despite intensive lobbying, research and clinical groups, pharmaceutical companies, regulatory authorities and government health systems have failed to understand the problem or to work collaboratively to bring about cost-effective ways of making these new genetic medicines available to patients. The clinical trial and drug development process for rare genetic conditions needs an urgent rethink.

Nick Catlin has a son Saul who is 15 and living with Duchenne Muscular Dystrophy. Nick was a founding member and CEO of the Charity Action Duchenne and he is now working with younger children, schools and their families to develop specialist programmes of support and best educational outcomes for the non-profit Decipha CIC.

Aaron Dane (AstraZeneca) - The use of limited data to achieve regulatory drug approval: Options for Antibiotic Drug Development

At present, there are situations in antibiotic drug development where the low number of patients with key problem pathogens makes it impossible to conduct traditional, fully powered trials. This talk will outline statistical issues regarding the application of alternative techniques, balancing the unmet need with the level of certainty required in the approval process, along with the use of additional sources of data critical to improving feasibility. The identification and quantification of risks associated with these approaches will be important in order to perform an informed review of new treatments, whilst maintaining the feasibility of developing drugs to treat the most concerning pathogens.

Tim Morris (MRC Clinical Trials Unit, London) - A framework for the design and analysis of phase III randomised trials when large-scale trials are not possible

There is a methodological gap between common diseases, where large trials can be undertaken, and truly rare diseases, where trial designers increasingly make a complete shift of methodological paradigm. How should we approach the design of trials when we can feasibly get some, but not all, of the way to the numbers required for a traditional phase III trial? We present a framework for designing trials to fill this gap between common and rare diseases.

Staying with the frequentist approaches that are well understood and accepted in large trials, we consider several alterations to the design parameters. Put together, these can provide important reductions in the required sample size. The obvious parameters to consider are the type I error rate, power and targeted effect size, bearing in mind the consequences of making a wrong decision about treatment. Other aspects are: choice of outcome measures, leveraging covariate information, skewing the allocation ratio, re-randomising previous participants and borrowing external information. We emphasise the benefits of some of these changes and caution against others. We illustrate the savings that can be achieved without requiring a complete move away from the paradigm that is established and widely accepted in more common diseases. In planning trials, we propose that investigators use this framework before resorting to a complete change of inferential paradigm.
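
To make the effect of these parameters concrete, here is a rough sketch (not taken from the talk) of the standard normal-approximation sample-size formula for a two-arm comparison of a continuous outcome, showing how relaxing the one-sided type I error rate, power and targeted effect size, or skewing the allocation ratio, changes the required total sample size. All numerical settings are illustrative assumptions.

```python
# Per-control-group n = (1 + 1/r) * (z_{1-alpha} + z_{1-beta})^2 * sigma^2 / delta^2,
# for allocation ratio r (experimental:control); all numbers below are assumptions.
from scipy.stats import norm

def total_n(alpha, power, delta, sigma=1.0, ratio=1.0, one_sided=True):
    z_a = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n_control = (1 + 1 / ratio) * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2
    n_experimental = ratio * n_control
    return n_control + n_experimental

# Conventional phase III settings versus altered design parameters
print(total_n(alpha=0.025, power=0.90, delta=0.3))           # traditional
print(total_n(alpha=0.10,  power=0.80, delta=0.3))           # relaxed alpha and power
print(total_n(alpha=0.10,  power=0.80, delta=0.4))           # plus a larger targeted effect
print(total_n(alpha=0.10,  power=0.80, delta=0.4, ratio=2))  # plus 2:1 allocation
```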

Martin Posch (Medical University of Vienna) - An extrapolation framework to specify requirements for drug development in children

A fully independent drug development programme to demonstrate efficacy in small populations such as children may not be ethical, especially if information on the efficacy of a drug is available from other sources. In a Bayesian framework, and under the assumption of successful drug development in adults, we determine the amount of additional evidence needed in children to achieve the same confidence in efficacy as in the adult population. To this end, we determine when the significance level for the test of efficacy in confirmatory trials in the target population can be relaxed (and thus the sample size reduced) while maintaining the posterior confidence in effectiveness. An important parameter in this extrapolation framework is the so-called scepticism factor, which represents the Bayesian probability that a finding of efficacy in adults cannot be extrapolated to children. The framework is illustrated with an example.
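
The following back-of-envelope sketch is only in the spirit of the abstract, not the published framework: it shows how a prior probability of efficacy in children, formed from the confidence achieved in adults discounted by a scepticism factor, can be translated into a relaxed one-sided significance level that preserves a target posterior confidence in effectiveness. Every number, and the simple power/alpha relationship used, is an assumption for illustration.

```python
# Simplified relaxed-significance-level calculation (illustrative assumptions throughout).

def relaxed_alpha(prior_child, power, target_posterior):
    """Solve target = prior*power / (prior*power + (1 - prior)*alpha) for alpha."""
    return prior_child * power * (1 - target_posterior) / (target_posterior * (1 - prior_child))

scepticism = 0.2                 # assumed probability that adult efficacy does NOT extrapolate
posterior_adults = 0.95          # assumed confidence in efficacy after the adult programme
prior_child = (1 - scepticism) * posterior_adults

alpha_child = relaxed_alpha(prior_child, power=0.8, target_posterior=0.95)
print(f"prior for children: {prior_child:.2f}, relaxed one-sided alpha: {alpha_child:.3f}")
```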

Nigel Stallard (University of Warwick) - Recent advances in methodology for clinical trials in small populations: the InSPiRe project

The Innovative Methodology for Small Populations Research (InSPiRe) project is one of three projects funded under the EU Framework Programme 7 call for "New methodologies for clinical trials in small population groups". The project brings together experts in innovative clinical trial methods with the aim of enabling rapid evaluation of treatments whilst maintaining scientific and statistical rigour.

This talk will be in two parts. The first part will briefly outline the work of the InSPiRe project, giving an overview of the four main work packages on early dose-finding trials, decision-theoretic designs, confirmatory trials and personalised medicines, and evidence synthesis in the planning of clinical trials in small populations. The second part of the talk will describe in more detail work on a decision-theoretic approach to designing a trial taking account of the population size. We optimise a utility function that quantifies the cost and gain per patient, and show that the optimal trial sample size is asymptotically proportional to the square root of the population size. The method is illustrated using an example in which the asymptotic sample sizes obtained are compared with the exact optimal sample size, showing that the approximations can be reasonable even in relatively small trials.
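
The sketch below gives only the flavour of this kind of result; it is not the InSPiRe utility function. It chooses the per-arm sample size that maximises the expected benefit to the remaining patients in a population of size N when the apparently better of two arms is selected after the trial, ignoring per-patient costs. The optimum grows far more slowly than N, in the spirit of the square-root behaviour mentioned above; the effect size and outcome model are assumptions.

```python
# Grid-search illustration of an "optimal sample size given population size" calculation
# (invented utility and settings, for illustration only).
import numpy as np
from scipy.stats import norm

delta, sigma = 0.5, 1.0          # assumed true advantage and outcome standard deviation

def expected_gain(n, N):
    """Expected extra benefit: remaining patients * advantage * P(picking the better arm)."""
    p_correct = norm.cdf(delta / (sigma * np.sqrt(2.0 / n)))
    return (N - 2 * n) * delta * p_correct

for N in (500, 2000, 8000):
    n_grid = np.arange(2, N // 2)
    n_opt = n_grid[np.argmax([expected_gain(n, N) for n in n_grid])]
    print(f"population {N}: optimal n per arm ~ {n_opt}, sqrt(N) = {np.sqrt(N):.0f}")
```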

Marie-Cécile Le Deley (Institut Gustave Roussy) - Randomised controlled trial designs in the setting of rare diseases: evaluation of a series of trials over a long-term research horizon

Several research projects are currently addressing the issue of clinical trials in rare diseases. Adaptive designs and Bayesian trials are frequently presented as appealing approaches in this setting. Our own research questions the level of evidence required in clinical trials when using a frequentist approach. This was evaluated through a simulation framework considering a series of two-arm superiority trials over a 15-year period. The design parameters examined were the α-level and the number of trials conducted over the 15-year period (and thus the trial sample size). Different disease severities and accrual rates were considered. The future treatment effect was characterised by its associated hazard rate; different hypotheses of how treatments improve over time were considered. We defined the total survival benefit as the relative difference in hazard rate at year 15 versus year 0. The optimal design was defined as the one maximising the expected total survival benefit, provided that the risk of selecting at year 15 a treatment inferior to the initial control treatment remained below 1%.

Compared to two larger trials with a typical one-sided 2.5% α-level, performing a series of small trials with relaxed α-levels leads on average to larger survival benefits over a 15-year research horizon, but also to a higher risk of selecting a worse treatment at the end of the research period. Under reasonably optimistic assumptions about future treatment effects, optimal designs outperform traditional ones when the disease is severe (baseline median survival < 1 year) and accrual is ≥ 100 patients/year, whereas no major improvement is observed in diseases with a better prognosis.

Trial designs aiming to maximise survival gain over a long research horizon across a series of trials are worth discussing in the context of rare diseases. Our simulation framework offers enough flexibility to evaluate more complex innovative designs on a long-term research horizon.
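
The code below is a much-simplified sketch of the kind of simulation described above, not the authors' actual framework: a series of two-arm superiority trials over a fixed horizon, in which the winner of each trial becomes the control of the next. Treatment effects are on the log hazard-ratio scale with a normal approximation for the estimate, and every numerical setting (total accrual, distribution of effects, α-levels) is an assumption for illustration only.

```python
# Series-of-trials simulation sketch: few large strict trials vs many small relaxed trials.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def run_series(n_trials, patients_total, alpha, n_sims=2000):
    """Return mean total log-HR benefit and the risk of ending worse than the year-0 control."""
    n_per_trial = patients_total // n_trials
    benefits, worse = [], 0
    for _ in range(n_sims):
        cumulative = 0.0                              # log HR of current control vs year-0 control
        for _ in range(n_trials):
            true_effect = rng.normal(-0.1, 0.15)      # new vs current control (negative = better)
            se = 2.0 / np.sqrt(n_per_trial)           # approx SE of log HR if almost all patients have events
            estimate = rng.normal(true_effect, se)
            if estimate / se < norm.ppf(alpha):       # one-sided test for a hazard reduction
                cumulative += true_effect             # adopt the new treatment as the next control
        benefits.append(-cumulative)
        worse += cumulative > 0
    return np.mean(benefits), worse / n_sims

for n_trials, alpha in [(2, 0.025), (6, 0.10)]:
    gain, risk = run_series(n_trials, patients_total=1500, alpha=alpha)
    print(f"{n_trials} trials, alpha={alpha}: mean log-HR gain {gain:.3f}, risk of ending worse {risk:.3f}")
```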

Lisa Hampson (Lancaster University) - Bayesian methods for the design and interpretation of trials in rare diseases

For studies in rare diseases, the sample size needed to meet a conventional frequentist power requirement can be daunting, even if patients are to be recruited over several years. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose Bayesian approaches for the conduct of rare disease trials comparing an experimental treatment with control when the primary endpoint is binary or normally distributed. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy can be used to establish Bayesian priors for unknown model parameters. As sample sizes are to be small, it is possible to compute all possible posterior distributions of response rates and to summarise the range of outcomes; the frequentist error rates of the Bayesian design can therefore be computed exactly. Consideration of the extent to which opinion can be changed, even by the best feasible design, can help to determine whether such a trial is worthwhile. We illustrate the proposed methodology by describing applications to Bayesian randomised trials in childhood polyarteritis nodosa and chronic recurrent multifocal osteomyelitis.
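
As a small sketch of the exact-enumeration idea (not the authors' method), the code below considers a single-arm binary endpoint with a Bayesian decision rule, then enumerates every possible outcome to compute the rule's exact frequentist type I error and power. The prior, decision threshold and response rates are illustrative assumptions.

```python
# Exact frequentist operating characteristics of a Bayesian decision rule (assumed settings).
from scipy.stats import binom, beta

n, p0 = 20, 0.3                    # small trial; p0 is the uninteresting response rate
a, b = 2.0, 3.0                    # elicited Beta prior for the response rate (assumed)
threshold = 0.9                    # declare success if P(p > p0 | data) exceeds this

# Outcomes y for which the posterior probability of exceeding p0 is above the threshold
success_ys = [y for y in range(n + 1)
              if beta.sf(p0, a + y, b + n - y) > threshold]

# Exact type I error: probability of declaring success when the true rate is p0;
# exact power at an assumed clinically relevant response rate of 0.55.
type_I = sum(binom.pmf(y, n, p0) for y in success_ys)
power = sum(binom.pmf(y, n, 0.55) for y in success_ys)

print(f"declare success if y >= {min(success_ys)}; type I error = {type_I:.3f}, power = {power:.3f}")
```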