# MBNMAtime for time-course Model-Based (Network) Meta-Analysis

## Introduction

This vignette demonstrates how to use MBNMAtime to perform meta-analysis of studies with multiple follow-up measurements in order to account for time-course relationships within single or multiple treatment comparisons. This can be performed by conducting Model-Based (Network) Meta-Analysis (MBNMA) to pool relative treatment effects.

Including all available follow-up measurements within a study makes use of all the available evidence in a way that maintains connectivity between treatments and explains how the response to treatment changes over time, thus accounting for heterogeneity and inconsistency that may be present from “lumping” together different time points in a standard Network Meta-Analysis (NMA). All models and analyses are implemented in a Bayesian framework, following an extension of the standard NMA methodology presented by Lu and Ades (2004), and are run in JAGS (version 4.3.0 or later is required) (JAGS Computer Program 2017). For full details of time-course MBNMA methodology see Pedder et al. (2019), and a simulation study exploring the statistical properties of the method is reported in Pedder et al. (2020).

This package has been developed alongside MBNMAdose, a package that allows users to perform dose-response MBNMA to allow for modelling of dose-response relationships between different agents within a network. However, they should not be loaded into R at the same time as there are a number of functions with shared names that perform similar tasks yet are specific to dealing with either time-course or dose-response data.

Within the vignette, some models have not been evaluated, or have been run with fewer iterations than would be necessary to achieve convergence and produce valid results in practice. This has been done to speed up computation and rendering of the vignette.

### Workflow within the package

Functions within MBNMAtime follow a clear pattern of use:

1. Load your data into the correct format using mb.network()
2. Specify a suitable time-course function and analyse your data using mb.run()
3. Test for consistency using functions like mb.nodesplit()
4. Examine model results using forest plots and treatment rankings
5. Use your model to predict responses using predict()

At each of these stages there are a number of informative graphs that can be generated to help understand the data and make decisions regarding model fitting.
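The workflow above can be sketched as follows (a minimal illustration using the osteopain dataset included in the package; in practice more iterations and careful model checking would be needed, and the linear time-course here is chosen purely for brevity):

```r
library(MBNMAtime)

# 1. Load data into the correct format
network <- mb.network(osteopain, reference = "Pl_0")

# 2. Specify a time-course function and analyse the data
#    (a 1st-degree polynomial, i.e. linear, used only for illustration)
mbnma <- mb.run(network, fun = tpoly(degree = 1,
                                     pool.1 = "rel", method.1 = "common"))

# 5. Predict and plot responses over time
pred <- predict(mbnma)
plot(pred)
```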

## Datasets Included in the Package

### Pain relief in osteoarthritis

osteopain is from a systematic review of treatments for pain in osteoarthritis, used previously in Pedder et al. (2019). The outcome is pain measured on a continuous scale, and aggregate data responses correspond to the mean WOMAC pain score at different follow-up times. The dataset includes 30 Randomised-Controlled Trials (RCTs), comparing 29 different treatments (including placebo). osteopain is a data frame in long format (one row per time point, arm and study), with the variables studyID, time, y, se, treatment and arm.

studyID time y se treatment arm treatname
Baerwald 2010 0 6.55 0.09 Pl_0 1 Placebo_0
Baerwald 2010 2 5.40 0.09 Pl_0 1 Placebo_0
Baerwald 2010 6 4.97 0.10 Pl_0 1 Placebo_0
Baerwald 2010 13 4.75 0.11 Pl_0 1 Placebo_0
Baerwald 2010 0 6.40 0.13 Na_1000 2 Naproxen_1000
Baerwald 2010 2 4.03 0.13 Na_1000 2 Naproxen_1000

### Alogliptin for lowering blood glucose concentration in type II diabetes

alog_pcfb is from a systematic review of Randomised-Controlled Trials (RCTs) comparing different doses of alogliptin with placebo (Langford et al. 2016). The systematic review was performed to provide data with which to illustrate a statistical methodology, rather than for clinical inference. Alogliptin is a treatment aimed at reducing blood glucose concentration in type II diabetes. The outcome is continuous, and aggregate data responses correspond to the mean change in HbA1c from baseline to follow-up in studies of at least 12 weeks follow-up. The dataset includes 14 RCTs comparing 5 different doses of alogliptin with placebo (6 different treatments in total). alog_pcfb is a data frame in long format (one row per time point, arm and study), with the variables studyID, clinicaltrialGov_ID, agent, dose, treatment, time, y, se, and N.

studyID clinicaltrialGov_ID agent dose treatment time y se N
1 NCT01263470 alogliptin 0.00 placebo 2 0.00 0.02 75
1 NCT01263470 alogliptin 6.25 alog_6.25 2 -0.16 0.02 79
1 NCT01263470 alogliptin 12.50 alog_12.5 2 -0.17 0.02 84
1 NCT01263470 alogliptin 25.00 alog_25 2 -0.16 0.02 79
1 NCT01263470 alogliptin 50.00 alog_50 2 -0.15 0.02 79
1 NCT01263470 alogliptin 0.00 placebo 4 -0.01 0.04 75

### Tiotropium, Aclidinium and Placebo for maintenance treatment of moderate to severe chronic obstructive pulmonary disease

A dataset from a systematic review of Randomised-Controlled Trials (RCTs) for maintenance treatment of moderate to severe chronic obstructive pulmonary disease (COPD) (Karabis et al. 2013). Data were extracted from Tallarita, De Iorio, and Baio (2019). SEs were imputed for three studies using the median standard deviation calculated from other studies in the dataset, and the number of patients randomised was imputed for one study (LAS 39) in which it was missing. The outcome is trough Forced Expiratory Volume in 1 second (FEV1), measured in litres and reported in each study arm as mean change from baseline to follow-up. The dataset includes 13 RCTs comparing 2 treatments (Tiotropium and Aclidinium) and placebo. copd is a data frame in long format (one row per time point, arm and study), with the variables studyID, time, y, se, treatment, and n.

studyID time y se treatment n
ACCORD I 1 -0.01 0.01 Placebo 187
ACCORD I 4 -0.01 0.01 Placebo 187
ACCORD I 8 -0.01 0.01 Placebo 187
ACCORD I 12 -0.02 0.01 Placebo 187
ACCORD I 1 0.10 0.01 Aclidinium 190
ACCORD I 4 0.11 0.01 Aclidinium 190

### Body weight reduction in obesity patients

obesityBW_CFB is from a systematic review of pharmacological treatments for obesity. The outcome measured is change from baseline in body weight (kg) at different follow-up times. 35 RCTs are included that investigate 26 different treatments (16 agents/agent combinations compared at different doses). obesityBW_CFB is a dataset in long format (one row per time point, arm and study), with the variables studyID, time, y, se, N, treatment, arm, treatname, agent and class.

class is the class of a particular agent (e.g. Lipase inhibitor)

studyID time y se N treatment treatname agent class
Apfelbaum 1999 4.35 -1.00 0.39 78 plac placebo placebo Placebo
Apfelbaum 1999 4.35 -1.59 0.38 81 sibu_10MG sibutramine 10MG sibutramine SNRI
Apfelbaum 1999 8.70 -1.59 0.40 78 plac placebo placebo Placebo
Apfelbaum 1999 8.70 -3.01 0.39 81 sibu_10MG sibutramine 10MG sibutramine SNRI
Apfelbaum 1999 13.04 -2.25 0.41 78 plac placebo placebo Placebo
Apfelbaum 1999 13.04 -4.76 0.40 81 sibu_10MG sibutramine 10MG sibutramine SNRI

### Serum uric acid concentration in gout

goutSUA_CFB is from a systematic review of interventions for lowering Serum Uric Acid (SUA) concentration in patients with gout [not published previously]. The outcome is continuous, and aggregate data responses correspond to the mean change from baseline in SUA in mg/dL at different follow-up times. The dataset includes 28 RCTs, comparing 41 treatments (8 agents compared at different doses). goutSUA_CFB is a data frame in long format (one row per time point, arm and study), with the variables studyID, time, y, se, treatment, arm, class and treatname.

studyID time y se treatment treatname class
1102 1 0.07 0.25 RDEA_100 RDEA594_100 RDEA
1102 1 0.02 0.18 RDEA_200 RDEA594_200 RDEA
1102 1 0.06 0.25 RDEA_400 RDEA594_400 RDEA
1102 2 -0.53 0.25 RDEA_100 RDEA594_100 RDEA
1102 2 -1.37 0.18 RDEA_200 RDEA594_200 RDEA
1102 2 -1.73 0.25 RDEA_400 RDEA594_400 RDEA

## Inspecting the data

Before embarking on an analysis, the first step is to have a look at the raw data. Two features (network connectivity and time-course relationship) are particularly important for MBNMA. To investigate these we must first get our dataset into the right format for the package. We can do this using mb.network(). This requires specifying the desired treatment to use for the network reference treatment, though one will automatically be specified if not given.

# Using the pain dataset
network.pain <- mb.network(osteopain, reference = "Pl_0")
#> Studies reporting change from baseline automatically identified from the data
print(network.pain)
#> description :
#> [1] "Network"
#>
#> data.ab :
#> # A tibble: 417 x 10
#> # Groups:   studyID, time [116]
#>    studyID    time     y     se treatment   arm treatname   fupcount  fups  narm
#>    <fct>     <dbl> <dbl>  <dbl>     <dbl> <int> <fct>          <int> <int> <int>
#>  1 Baerwald~     0  6.55 0.0861         1     1 Placebo_0          1     4     3
#>  2 Baerwald~     0  6.40 0.127         15     2 Naproxen_1~        1     4     3
#>  3 Baerwald~     0  6.62 0.0900        16     3 Naproxcino~        1     4     3
#>  4 Baerwald~     2  5.40 0.0932         1     1 Placebo_0          2     4     3
#>  5 Baerwald~     2  4.03 0.133         15     2 Naproxen_1~        2     4     3
#>  6 Baerwald~     2  4.43 0.0926        16     3 Naproxcino~        2     4     3
#>  7 Baerwald~     6  4.97 0.0997         1     1 Placebo_0          3     4     3
#>  8 Baerwald~     6  3.72 0.139         15     2 Naproxen_1~        3     4     3
#>  9 Baerwald~     6  4.08 0.0965        16     3 Naproxcino~        3     4     3
#> 10 Baerwald~    13  4.75 0.109          1     1 Placebo_0          4     4     3
#> # ... with 407 more rows
#>
#> studyID :
#>  [1] "Baerwald 2010"     "Bensen 1999"       "Bingham 2007a"
#>  [4] "Bingham 2007b"     "Birbara 2006_1"    "Birbara 2006_2"
#>  [7] "Chappell 2009"     "Chappell 2011"     "Clegg 2006"
#> [10] "DeLemos 2011"      "Enrich 1999"       "Fishman 2007"
#> [13] "Fleischmann 2005"  "Gana 2006"         "Gottesdiener 2002"
#> [16] "Kivitz 2001"       "Kivitz 2002"       "Lehmann 2005"
#> [19] "Leung 2002"        "Markenson 2005"    "McKenna 2001"
#> [22] "Puopolo 2007"      "Sawitzke 2010"     "Schnitzer 2005_2"
#> [25] "Schnitzer 2010"    "Schnitzer 2011LUM" "Sheldon 2005"
#> [28] "Sowers 2005"       "Tannenbaum 2004"   "Williams 2001"
#>
#> cfb :
#>  [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [25] FALSE FALSE FALSE FALSE FALSE FALSE
#>
#> treatments :
#>  [1] "Pl_0"    "Ce_100"  "Ce_200"  "Ce_400"  "Du_90"   "Et_10"   "Et_30"
#>  [8] "Et_5"    "Et_60"   "Et_90"   "Lu_100"  "Lu_200"  "Lu_400"  "Lu_NA"
#> [15] "Na_1000" "Na_1500" "Na_250"  "Na_750"  "Ox_44"   "Ro_12"   "Ro_125"
#> [22] "Ro_25"   "Tr_100"  "Tr_200"  "Tr_300"  "Tr_400"  "Va_10"   "Va_20"
#> [29] "Va_5"

This takes a dataset with the columns:

• studyID Study identifiers
• time Numeric data indicating follow-up times
• y Numeric data indicating the mean response for a given observation
• se Numeric data indicating the standard error for a given observation
• treatment Treatment identifiers (can be numeric, factor or character)
• class An optional column indicating a particular class that may be shared by several treatments.
• N An optional column indicating the number of participants used to calculate the response at a given observation.

Additional columns can be included in the dataset. These will simply be added to the mb.network object, though will not affect the classification of the data.

mb.network then performs the following checks on the data:

• The dataset has the required column names
• There are no missing values
• All standard errors (SE) are positive
• Observations are made at the same time points in all arms of a study (i.e. the data are balanced)
• Class labels are consistent within each treatment
• Studies have at least two arms

Unless otherwise specified, mb.network() will automatically determine whether each study in the dataset is reported as change from baseline or absolute - studies that include a measurement at time=0 are assumed to report absolute values, whilst those with no measurement at time=0 are assumed to be change from baseline. This can also be explicitly specified by the user by including a logical vector for the argument cfb in mb.network() - TRUE indicates a study reports change from baseline, and FALSE indicates that it reports absolute values.
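For example, the automatic identification could be overridden as follows (a sketch, assuming all 30 studies in osteopain report absolute values; cfb takes one logical value per study):

```r
# Explicitly flag every study as reporting absolute values
# (FALSE = absolute, TRUE = change from baseline)
network.pain <- mb.network(osteopain, reference = "Pl_0",
                           cfb = rep(FALSE, 30))
```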

Finally, mb.network() converts the data into an object of class("mb.network"), which contains indices for study arms and follow-up measurements, and generates numeric values for treatments and classes. By convention, treatments are numbered alphabetically, though if the original data for treatments is provided as a factor then the factor codes will be used. This then contains all the necessary information for subsequent MBNMAtime functions.

### Network connectivity

Looking at how the evidence in the network is connected, and identifying which studies compare which treatments, helps to understand which treatment effects can be estimated and what information will inform those estimates. A network plot can be generated which shows which treatments have been compared in head-to-head trials. Typically the thickness of connecting lines (“edges”) is proportional to the number of studies that make a particular comparison, and the size of treatment nodes (“vertices”) is proportional to the total number of patients in the network who were randomised to a given treatment (provided N is included as a variable in the original dataset for mb.network()).

In MBNMAtime these plots are generated using igraph, and can be plotted by calling plot(). The generated plots are objects of class("igraph") meaning that, in addition to the options specified in plot(), various igraph functions can be used to make more detailed edits to them.
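For example, the returned graph can be stored and then inspected or modified with igraph functions (a sketch, assuming plot() returns the igraph object as described above):

```r
# Store the igraph object returned by plot()
g <- plot(network.pain)

# igraph functions can then be used on it directly,
# e.g. counting the edges (treatment comparisons) in the plot
igraph::ecount(g)
```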

# Prepare data using the alogliptin dataset
network.alog <- mb.network(alog_pcfb, reference = "placebo")
#> Studies reporting change from baseline automatically identified from the data

# Plot network
plot(network.alog)

Within these network plots, treatments are automatically aligned in a circle (as the default) and can be tidied by shifting the label distance away from the nodes.

# Draw network plot in a star layout using the gout dataset
network.gout <- mb.network(goutSUA_CFB, reference = "Plac")
plot(network.gout, layout=igraph::as_star(), label.distance = 5)
#> Studies reporting change from baseline automatically identified from the data

This command returns a warning stating that some treatments are not connected to the network reference treatment through any pathway of head-to-head evidence. The nodes that are coloured white represent these treatments. This means that it will not be possible to estimate relative effects for these treatments versus the network reference treatment (or any treatments connected to it). Several options exist to allow for inclusion of these treatments in an analysis which we will discuss in more detail later, but one approach is to assume a shared effect among treatments within the same class/agent. We can generate a network plot at the class level to examine this more closely, and can see that the network is connected at the class level.

plot(network.gout, level = "class", remove.loops = TRUE, label.distance = 5)

It is also possible to plot a network at the treatment level but to colour the treatments by the class that they belong to.

plot(network.gout, level = "treatment", v.color = "class", label.distance = 5)

### Examining the time-course relationship

In order to consider which functional forms may be appropriate for modelling the time-course relationship, it is important to look at the responses in each arm plotted over time. This can easily be done using the timeplot() function on an object of class("mb.network").

# Prepare data using the pain dataset
network.pain <- mb.network(osteopain, reference="Pl_0")
#> Studies reporting change from baseline automatically identified from the data

# Draw plot of raw study responses over time
timeplot(network.pain)

As the mean response for all treatments shows a rapid reduction in pain score followed by a levelling out after 2-5 weeks, an exponential decay time-course function might be a reasonable fit for this dataset. More complex alternatives could be Emax models (with or without a Hill parameter), fractional polynomials or a spline function.

Responses can also be plotted grouped by class rather than by treatment, and the relative effects between each treatment/class can be plotted instead of the absolute treatment responses. Since the MBNMA framework models the time-course on relative effects (Pedder et al. 2019) this can in fact make interpretation of the plots easier with regards to identifying a best-fitting time-course function.

# Draw plot of within-study relative effects over time grouped by class
network.gout <- mb.network(goutSUA_CFBcomb)
timeplot(network.gout, level="class", plotby="rel")

Many of the profiles here appear to be quite different within the same class, which would suggest modelling class effects may be inappropriate for this dataset.

## Analysis using mb.run()

MBNMA models are fitted using mb.run(). This can just as easily be performed on datasets with many different treatments (network meta-analysis) as it can on datasets comparing only two treatments (pairwise meta-analysis) - the syntax is the same.

An object of class("mb.network") must be provided as the data for mb.run(). The key arguments within mb.run() involve specifying the functional form used to model the time-course, and the time-course parameters that comprise that functional form.

#### Time-course functions

A number of different time-course functions can be fitted within MBNMAtime. The specific forms of the time-course parameters are defined by arguments within these functions, which allows for a wide variety of parameterizations and time-course shapes. For further details check the help files for each function (e.g. ?tloglin()). These functions are then used as inputs for the fun argument in mb.run().

• tloglin() - Log-linear function
• texp() - Exponential function
• temax() - Emax function
• tpoly() - Polynomial function (e.g. linear, quadratic)
• tfpoly() - Fractional polynomial function, as proposed previously for time-course NMA by Jansen (2015).
• tspline() - Spline functions (includes B-splines, restricted cubic splines, natural splines and piecewise linear splines)
• tuser() - A time-course function that can be explicitly defined by the user

Time-course parameters within time-course functions are each defined by two arguments:

pool is used to define the approach used for pooling a given time-course parameter and can take either of:

• "rel" indicates that relative effects (or mean differences) should be pooled for this time-course parameter. Relative effects preserve randomisation within included studies, are likely to vary less between studies (only due to effect modification), and allow for testing of consistency between direct and indirect evidence. Pooling follows the general approach for Network Meta-Analysis proposed by Lu and Ades (2004).
• "abs" indicates that study arms should be pooled across the whole network for this time-course parameter, independently of assigned treatment. This implies using a single absolute value across the network for this time-course parameter, and may therefore require strong assumptions of similarity across studies.

method is used to define the model used for meta-analysis for a given time-course parameter and can take either of:

• "common" implies that all studies estimate the same true effect (sometimes called a “fixed effect” meta-analysis)
• "random" implies that each study estimates a separate true effect, and that these true effects vary randomly around a shared mean effect. This approach allows for modelling of between-study heterogeneity.

Pooling relative effects on all time-course parameters implies a contrast-based synthesis, whereas pooling absolute effects on all of them implies an arm-based synthesis. There has been substantial discussion in the literature regarding the strengths and limitations of these two approaches (Dias and Ades 2016; Hong et al. 2016; Karahalios et al. 2017).

Additional arguments within the function may also be used to specify the degree (e.g. for polynomials) or the number of knots or knot placement for splines.
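For instance, a quadratic polynomial or a piecewise linear spline could be specified as follows (a sketch; see ?tpoly() and ?tspline() for the full argument lists):

```r
# Quadratic time-course: relative effects pooled on both coefficients
quad <- tpoly(degree = 2,
              pool.1 = "rel", method.1 = "common",
              pool.2 = "rel", method.2 = "common")

# Piecewise linear spline with a knot at 0.5 times the maximum follow-up
spline <- tspline(type = "ls", knots = 0.5,
                  pool.1 = "rel", method.1 = "common")
```

Either object could then be passed to the fun argument of mb.run().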

### Output

mb.run() returns an object of class(c("mbnma", "rjags")). summary() provides summary estimates of posterior densities for different parameters in the model, with some explanation regarding the way in which the model has been defined. Estimates are automatically reported for parameters of interest depending on the model specification (unless otherwise specified in parameters.to.save). Nodes that are automatically monitored (if present in the model) have the following interpretation:

#### Parameters modelled using relative effects

If pooling is relative (e.g. pool.1="rel") for a given parameter then the named parameter (e.g. emax) or a numbered d parameter (e.g. d.1) corresponds to the pooled relative effect (or mean difference) for a given treatment compared to the network reference treatment for this time-course parameter.

sd. followed by a named parameter (e.g. sd.emax, sd.beta.1) is the between-study SD (heterogeneity) for relative effects, reported if pooling for a time-course parameter is relative (e.g. pool.1="rel") and the method for synthesis is random (e.g. method.1="random").

If class effects are modelled, parameters for classes are represented by the upper case name of the time-course parameter they correspond to. For example if class.effect=list(emax="random"), relative class effects will be represented by EMAX. The SD of the class effect (e.g. sd.EMAX, sd.BETA.1) is the SD of treatments within a class for the time-course parameter they correspond to.

#### Parameters modelled using absolute effects

If pooling is absolute (e.g. pool.1="abs") for a given parameter then the named parameter (e.g. emax) or a numbered beta parameter (e.g. beta.1) corresponds to the estimated absolute effect for this time-course parameter.

For an absolute time-course parameter, if the corresponding method is common (e.g. method.1="common") the parameter corresponds to a single common value estimated across all studies and treatments. If the corresponding method is random (e.g. method.1="random") then the parameter is the mean effect around which the study-level absolute effects vary, with SD given by sd. followed by the named parameter (e.g. sd.emax, sd.beta.1).

#### Other model parameters

rho is the correlation coefficient for correlation between time-points. Its interpretation will differ depending on the covariance structure specified in covar.

totresdev is the residual deviance of the model and deviance is the model deviance. Model fit statistics for pD (effective number of parameters) and DIC (Deviance Information Criterion) are also reported, with an explanation as to how they have been calculated.

#### Examples

An example MBNMA of the alogliptin dataset, using a linear time-course function with common treatment effects that pools relative effects and assumes consistency between direct and indirect evidence, can be performed as follows:

# Run a linear time-course MBNMA
mbnma <- mb.run(network.alog, fun=tpoly(degree=1, pool.1="rel", method.1="common"))
#> module glm loaded
summary(mbnma)
#> ========================================
#> Time-course MBNMA
#> ========================================
#>
#> Time-course function: poly (degree = 1)
#> Data modelled without intercept (change from baseline data assumed)
#>
#> beta.1 parameter
#> Pooling: relative effects
#> Method: common treatment effects
#>
#> |Treatment |Parameter |  Median|    2.5%|   97.5%|
#> |:---------|:---------|-------:|-------:|-------:|
#> |placebo   |d.1[1]    |  0.0000|  0.0000|  0.0000|
#> |alog_6.25 |d.1[2]    | -0.0346| -0.0375| -0.0319|
#> |alog_12.5 |d.1[3]    | -0.0422| -0.0440| -0.0405|
#> |alog_25   |d.1[4]    | -0.0449| -0.0467| -0.0432|
#> |alog_50   |d.1[5]    | -0.0511| -0.0539| -0.0483|
#> |alog_100  |d.1[6]    | -0.0485| -0.0667| -0.0302|
#>
#>
#>
#> Correlation between time points
#> Rho assigned a numeric value: 0
#>
#> #### Model Fit Statistics ####
#>
#> Effective number of parameters:
#> pD (pV) calculated using the rule, pD = var(deviance)/2 = 19
#> Deviance = 4502
#> Residual deviance = 5449
#> Deviance Information Criterion (DIC) = 4521

For this model, the d.1 parameters correspond to the 1st polynomial coefficient, and therefore are the linear gradient of the response over time for each treatment versus placebo - i.e. the mean difference for the change in efficacy for each treatment versus placebo. However, note that the residual deviance of the model is very high, suggesting (as we might expect) that this linear time-course function is a poor fit.

We may want to fit a more complex time-course function with two time-course parameters, such as an Emax function, yet limitations in the data might require that we make an assumption that one of the parameters does not vary by treatment. We can specify this by setting pool to be equal to "abs" for any parameters we choose.

# Run an Emax time-course MBNMA with two parameters
mbnma <- mb.run(network.alog, fun=temax(
pool.emax = "rel", method.emax="common",
pool.et50 = "abs", method.et50="common"
))
#> 'et50' parameters are on exponential scale to ensure they take positive values on the natural scale
summary(mbnma)
#> ========================================
#> Time-course MBNMA
#> ========================================
#>
#> Time-course function: emax
#> Data modelled without intercept (change from baseline data assumed)
#>
#> emax parameter
#> Pooling: relative effects
#> Method: common treatment effects
#>
#> |Treatment |Parameter |  Median|    2.5%|   97.5%|
#> |:---------|:---------|-------:|-------:|-------:|
#> |placebo   |emax[1]   |  0.0000|  0.0000|  0.0000|
#> |alog_6.25 |emax[2]   | -0.5886| -0.6546| -0.5227|
#> |alog_12.5 |emax[3]   | -0.7762| -0.8182| -0.7346|
#> |alog_25   |emax[4]   | -0.8452| -0.8892| -0.8030|
#> |alog_50   |emax[5]   | -0.9658| -1.0382| -0.9007|
#> |alog_100  |emax[6]   | -0.8291| -1.1018| -0.5641|
#>
#>
#> et50 parameter
#> Pooling: absolute effects
#> Method: common treatment effects
#> Parameter modelled on exponential scale to ensure it takes positive values on the natural scale
#>
#> |Treatment |Parameter | Median|   2.5%|  97.5%|
#> |:---------|:---------|------:|------:|------:|
#> |placebo   |et50      | 1.6466| 1.5659| 1.7269|
#> |alog_6.25 |et50      | 1.6466| 1.5659| 1.7269|
#> |alog_12.5 |et50      | 1.6466| 1.5659| 1.7269|
#> |alog_25   |et50      | 1.6466| 1.5659| 1.7269|
#> |alog_50   |et50      | 1.6466| 1.5659| 1.7269|
#> |alog_100  |et50      | 1.6466| 1.5659| 1.7269|
#>
#>
#>
#> Correlation between time points
#> Rho assigned a numeric value: 0
#>
#> #### Model Fit Statistics ####
#>
#> Effective number of parameters:
#> pD (pV) calculated using the rule, pD = var(deviance)/2 = 19
#> Deviance = 88
#> Residual deviance = 1035
#> Deviance Information Criterion (DIC) = 107

In this case, the parameters are named following the Emax function specification. emax corresponds to the maximum effect for each treatment versus placebo (interpretable as a mean difference versus placebo), whereas et50 is the log of the time at which 50% of the maximum response is achieved, across all treatments in the network. This assumes conditional constancy of absolute effects for this time-course parameter, which is typically a strong assumption. However, if there were limited data with which to inform this parameter (e.g. at earlier time-points) then such an assumption might be necessary, with the caveat that interpolation of response at time-points informed by this parameter may be more susceptible to bias. Further exploration of the degree of data required for reliable estimation of time-course parameters is given in Pedder et al. (2020).

### Additional model specification with mb.run()

#### Correlation between time points

Within-study correlation between time points can easily be modelled using mb.run(), though this requires some additional considerations. The simplest approach is to incorporate correlation by using a variance adjustment (Jansen, Vieira, and Cope 2015). This avoids the need to use a multivariate normal likelihood (which is slow to run), and it assumes a common correlation between neighbouring time-points. This is achieved by using the argument covar="varadj", which is the default in mb.run().

Two alternative covariance structures can be modelled, though these require fitting a multivariate normal likelihood and therefore take longer to run. covar="CS" specifies fitting a Compound Symmetry covariance structure, whilst covar="AR1" specifies fitting an autoregressive AR1 covariance structure to the multivariate normal likelihood used for modelling the correlation between multiple time points within a study (Kincaid 2005).

However, in addition to this, it is also necessary to specify a value for rho, which can be assigned in one of two ways:

• Given as string representing a JAGS prior distribution (Plummer 2017), which indicates that the correlation should be estimated from the data. For example, to specify a prior that the correlation between time-points will be between 0 and 1 with equal probability you could set rho="dunif(0,1)".
• Given as a single numeric value, which indicates that the correlation should be fixed to that value. For example, this value could be estimated externally from another study using Individual Participant Data. This could also be used to run a deterministic sensitivity analysis using different fixed values of rho.

# Using the COPD dataset
network.copd <- mb.network(copd)

# Run a log-linear time-course MBNMA
# that accounts for correlation between time points using variance adjustment
mbnma <- mb.run(network.copd,
fun=tloglin(pool.rate="rel", method.rate="random"),
rho="dunif(0,1)", covar="varadj")

It is important to note that the covariance matrix must be positive semi-definite. This may mean that, in order to satisfy this requirement for particular covariance structures, the values that rho can take are limited. rho must always be bounded by -1 and 1, but even within this range some negative values for rho can result in a matrix that is not positive semi-definite, which will lead to an error in the evaluation of the multivariate likelihood. If so, it may be necessary to further restrict the prior distribution.
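For example, if such errors arise when estimating rho, the prior could be restricted further (a sketch based on the COPD model above; the bound of 0.9 is an illustrative choice, not a recommendation):

```r
# Restrict the prior on rho to (0, 0.9) to keep the covariance matrix
# away from values that may make it non positive semi-definite
mbnma <- mb.run(network.copd,
                fun = tloglin(pool.rate = "rel", method.rate = "random"),
                rho = "dunif(0, 0.9)", covar = "CS")
```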

### Class effects

Shared effects between treatments within the network can be modelled using class effects. This requires assuming that different treatments have some sort of shared class effect, perhaps due to different (yet clinically similar) doses of the same agent or different treatments with a similar mechanism of action. One advantage of this is that class effects can be used to connect relative effects between treatments in a network that would be disconnected at the treatment level, but can be connected via classes at the class level. However, it is important to ensure that such an effect is clinically justifiable, as making these assumptions risks introducing heterogeneity/inconsistency.

Class effects can only be applied to time-course parameters which vary by treatment (pool="rel"), and class effects are modelled separately for each time-course parameter.

In mb.run() class effects are specified as a list, in which each element is named by the time-course parameter on which it should be modelled. The class effect for each time-course parameter can be either "common", in which the effects for each treatment within the same class are constrained to a common class effect, or "random", in which the effects for each treatment within the same class are assumed to be randomly distributed around a shared class mean.

# Run a B-spline time-course MBNMA with a knot at 0.2 times the max follow-up
# Common class effect on beta.2, the 2nd spline coefficient
mbnma <- mb.run(network.gout,
fun=tspline(type="bs", knots=c(0.2),
pool.1 = "rel", method.1="common",
pool.2="rel", method.2="random"),
class.effect = list(beta.2="common"))
summary(mbnma)
#> ========================================
#> Time-course MBNMA
#> ========================================
#>
#> Time-course function: B-spline (knots = 0.2; degree = 1)
#> Data modelled without intercept (change from baseline data assumed)
#>
#> beta.1 parameter
#> Pooling: relative effects
#> Method: common treatment effects
#>
#> |Treatment |Parameter |   Median|     2.5%|    97.5%|
#> |:---------|:---------|--------:|--------:|--------:|
#> |Plac      |d.1[1]    |   0.0000|   0.0000|   0.0000|
#> |Allo_100  |d.1[2]    |  -1.4974|  -2.9620|  -0.0938|
#> |Allo_200  |d.1[3]    |  -3.0017|  -3.2421|  -2.7605|
#> |Allo_289  |d.1[4]    |  -4.9082|  -5.0834|  -4.7349|
#> |Allo_400  |d.1[5]    | -12.7188| -14.2657| -11.1183|
#> |Arha_NA   |d.1[6]    |  -6.8347|  -7.5490|  -6.1277|
#> |BCX4_140  |d.1[7]    |  -4.5726|  -4.9409|  -4.1892|
#> |BCX4_18.5 |d.1[8]    |  -2.4309|  -2.8592|  -1.9908|
#> |BCX4_240  |d.1[9]    |  -5.8823|  -6.3385|  -5.4170|
#> |BCX4_80   |d.1[10]   |  -3.5876|  -4.0531|  -3.1312|
#> |Benz_NA   |d.1[11]   | -15.5031| -16.7099| -14.3277|
#> |Febu_140  |d.1[12]   |  -7.2801|  -7.4586|  -7.1037|
#> |Febu_210  |d.1[13]   |  -8.8125|  -8.9131|  -8.7141|
#> |Febu_25   |d.1[14]   |  -3.8327|  -4.0047|  -3.6680|
#> |Febu_72.5 |d.1[15]   |  -6.1662|  -6.3445|  -5.9914|
#> |RDEA_100  |d.1[16]   |  -2.3702|  -2.9105|  -1.8250|
#> |RDEA_200  |d.1[17]   |  -4.1130|  -4.4637|  -3.7593|
#> |RDEA_400  |d.1[18]   |  -5.2322|  -5.5773|  -4.8796|
#> |RDEA_600  |d.1[19]   |  -8.0723|  -8.5128|  -7.6459|
#>
#>
#> beta.2 parameter
#> Pooling: relative effects
#> Method: random treatment effects
#> Class effects modelled for this parameter
#>
#> |Treatment |Parameter |  Median|     2.5%|   97.5%|
#> |:---------|:---------|-------:|--------:|-------:|
#> |Plac      |d.2[1]    |  0.0000|   0.0000|  0.0000|
#> |Allo_100  |d.2[2]    | 17.4979|   3.8131| 30.4092|
#> |Allo_200  |d.2[3]    | 17.4979|   3.8131| 30.4092|
#> |Allo_289  |d.2[4]    | 17.4979|   3.8131| 30.4092|
#> |Allo_400  |d.2[5]    | 17.4979|   3.8131| 30.4092|
#> |Arha_NA   |d.2[6]    |  0.5423| -65.4203| 62.5671|
#> |BCX4_140  |d.2[7]    | 10.1274|  -2.1483| 22.3384|
#> |BCX4_18.5 |d.2[8]    | 10.1274|  -2.1483| 22.3384|
#> |BCX4_240  |d.2[9]    | 10.1274|  -2.1483| 22.3384|
#> |BCX4_80   |d.2[10]   | 10.1274|  -2.1483| 22.3384|
#> |Benz_NA   |d.2[11]   | 18.5211|  -6.3850| 42.3361|
#> |Febu_140  |d.2[12]   | 16.1413|   5.2284| 26.6198|
#> |Febu_210  |d.2[13]   | 16.1413|   5.2284| 26.6198|
#> |Febu_25   |d.2[14]   | 16.1413|   5.2284| 26.6198|
#> |Febu_72.5 |d.2[15]   | 16.1413|   5.2284| 26.6198|
#> |RDEA_100  |d.2[16]   | 20.5142|   3.6504| 36.9838|
#> |RDEA_200  |d.2[17]   | 20.5142|   3.6504| 36.9838|
#> |RDEA_400  |d.2[18]   | 20.5142|   3.6504| 36.9838|
#> |RDEA_600  |d.2[19]   | 20.5142|   3.6504| 36.9838|
#>
#> Between-study SD modelled for this parameter:
#>
#> |Parameter |  Median|   2.5%|   97.5%|
#> |:---------|-------:|------:|-------:|
#> |sd.beta.2 | 11.7419| 9.1363| 15.3765|
#>
#>
#> Class Effects
#>
#> Class effects for beta.2
#> Common (fixed) class effects
#>
#> |Class |Parameter |  Median|     2.5%|   97.5%|
#> |:-----|:---------|-------:|--------:|-------:|
#> |Plac  |D.2[1]    |  0.0000|   0.0000|  0.0000|
#> |Allo  |D.2[2]    | 17.4979|   3.8131| 30.4092|
#> |Arha  |D.2[3]    |  0.5423| -65.4203| 62.5671|
#> |BCX4  |D.2[4]    | 10.1274|  -2.1483| 22.3384|
#> |Benz  |D.2[5]    | 18.5211|  -6.3850| 42.3361|
#> |Febu  |D.2[6]    | 16.1413|   5.2284| 26.6198|
#> |RDEA  |D.2[7]    | 20.5142|   3.6504| 36.9838|
#>
#>
#> Correlation between time points
#> Rho assigned a numeric value: 0
#>
#> #### Model Fit Statistics ####
#>
#> Effective number of parameters:
#> pD (pV) calculated using the rule, pD = var(deviance)/2 = 85
#> Deviance = 149881
#> Residual deviance = 150576
#> Deviance Information Criterion (DIC) = 149966

Mean class effects are given in the output as D.2 parameters. These can be interpreted as the relative effect of each class versus the Plac (Placebo), for the 2nd spline coefficient (beta.2). Note the number of D.2 parameters is therefore equal to the number of classes defined in the dataset.

Several additional arguments can be given to mb.run() that require further explanation.

#### Priors

Default vague priors for the model are as follows:

\begin{aligned} &\alpha_{i} \sim N(0,10000)\\ &\boldsymbol{\mu}_{i} \sim MVN(0,M_{i})\\ &\boldsymbol{d}_{t} \sim MVN(0,\Sigma_{t})\\ &\beta_{\phi} \sim N(0,10000)\\ &D_{\phi,c} \sim N(0,1000)\\ &\tau_{\phi} \sim N(0,400) \text{ limited to } x \in [0,\infty]\\ &\tau^D_{\phi} \sim N(0,400) \text{ limited to } x \in [0,\infty]\\ &M_{i} \sim Wish^{-1}(\Omega,k)\\ &\Sigma_{t} \sim Wish^{-1}(\Omega,k)\\ \end{aligned}

• $$\alpha_i$$ is the response at time=0 in study $$i$$
• $$\mu_i$$ is a vector of study reference effects for each time-course parameter in study $$i$$. Where only a single time-course parameter is modelled using relative effects the prior is defined as $$\mu_{i} \sim N(0,10000)$$.
• $$\boldsymbol{d}_{t}$$ is a vector of pooled relative effects for treatment $$t$$ whose length is the number of time-course parameters in the model. Where only a single time-course parameter is modelled using relative effects the prior is defined as $$d_{t} \sim N(0,10000)$$.
• $$\beta_{\phi}$$ is the absolute effect for time-course parameter $$\phi$$ modelled independently of treatment
• $$D_{\phi,c}$$ is the class relative effect for time-course parameter $$\phi$$ in class $$c$$
• $$\tau_{\phi}$$ is the between-study SD for time-course parameter $$\phi$$
• $$\tau^D_{\phi}$$ is the within-class SD for time-course parameter $$\phi$$
• $$\Omega$$ is a diagonal scale matrix specified in var.scale
• $$k$$ is the degrees of freedom for the inverse-Wishart distribution, equal to the number of time-course parameters

Users may wish to change these, perhaps in order to use more/less informative priors, but also because the default prior distributions in some models may lead to errors when compiling/updating models.

This can be more likely for certain types of models. For example some prior distributions may generate results that are too extreme for JAGS to compute, such as for time-course parameters that are powers (e.g. Emax functions with a Hill parameter or power parameters in fractional polynomials).

If the model fails during compilation/updating (i.e. due to a problem in JAGS), mb.run() will generate an error and return a list of arguments that mb.run() used to generate the model. Within this (as within a model that has run successfully), the priors used by the model (in JAGS syntax) are stored within "model.arg".

In this way a model can first be run with vague priors and then rerun with different priors, perhaps to allow successful computation, perhaps to provide more informative priors, or perhaps to run a sensitivity analysis with different priors.
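For example, the priors from a previous model run (successful or failed) can be inspected before being modified. A minimal sketch, assuming the returned object was assigned to mbnma and that the priors are stored in an element named priors within model.arg, as described above:

```r
# Sketch: inspect the priors (in JAGS syntax) stored within "model.arg"
# from a previous mb.run() call assigned to `mbnma`
print(mbnma$model.arg$priors)
```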

To change priors within a model, a list of replacements can be provided to priors in mb.run(). The name of each element is the name of the parameter to change (without indices) and the value of the element is the JAGS distribution to use for the prior. See the JAGS Manual (2017) for syntax details regarding specifying distributions. This can include censoring or truncation if desired. Only the priors to be changed need to be specified - priors for parameters that aren’t specified will take default values. Note that in JAGS, normal distributions are specified using precision (1/variance) rather than SD.

For example, we may wish to specify a tighter prior for the between-study SD:

mbnma <- mb.run(network.copd,
fun=tloglin(pool.rate="rel", method.rate="random"),
priors=list(rate="dnorm(0,2) T(0,)"))

#### pD (effective number of parameters)

The default value for pd in mb.run() is pd="pv", which uses the rapid approach automatically calculated in the R2jags package as pv = var(deviance)/2. Whilst this is easy to calculate, it is only an approximation to the effective number of parameters, and may be numerically unstable (Gelman et al. 2003). However, it has been shown to be reliable for model comparison in time-course MBNMA models in a simulation study (Pedder et al. 2020).

A more reliable method for estimating pd is pd="pd.kl", which uses the Kullback-Leibler divergence (Plummer 2008). This is more reliable than the default method used in R2jags for calculating the effective number of parameters from non-linear models. The disadvantage of this approach is that it requires running additional MCMC iterations, so can be slightly slower to calculate.

A commonly-used approach in Bayesian models for calculating pD is the plug-in method (pd="plugin") (Spiegelhalter et al. 2002). However, this can sometimes result in nonsensical negative values due to skewed posterior distributions for deviance contributions that can arise when fitting non-linear models.

Finally, pD can also be calculated using an optimism adjustment (pd="popt") which allows for calculation of the penalized expected deviance (Plummer 2008). This adjustment allows for the fact that data used to estimate the model is the same as that used to assess its parsimony. As for pd="pd.kl", it also requires running additional MCMC iterations.
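As a sketch, the pD calculation method is switched via the pd argument when the model is run (model and time-course function reused from the earlier COPD example):

```r
# Sketch: use the Kullback-Leibler approach for calculating pD
# (requires additional MCMC iterations, so is slightly slower)
mbnma <- mb.run(network.copd,
                fun=tloglin(pool.rate="rel", method.rate="random"),
                pd="pd.kl")
```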

#### Correlation between time-course parameters

mb.run() automatically models correlation between time-course parameters modelled using relative effects (though this can be prevented by setting corparam=FALSE). Time-course parameters are typically correlated, and this allows information on each parameter to help inform the other(s). The correlation is modelled using a multivariate normal distribution with a vague inverse-Wishart prior on the covariance matrix $$\Sigma_t$$. This prior can be made more informative by indicating the scale matrix for the parameters that are modelled using relative effects, and by increasing its degrees of freedom.

omega can be used as an argument in mb.run() to represent this scale matrix. If specified, it must be a symmetric positive definite matrix with dimensions equal to the number of time-course parameters modelled using relative effects (pool="rel"). If left as NULL (the default) a diagonal matrix with elements equal to 1 is used. The degrees of freedom can then be changed within the priors argument of mb.run() to make the prior more informative.

For example, with the osteoarthritis dataset we might expect that for a piecewise linear time-course function, the parameter values (in this model the relative difference in gradient versus placebo) for the first coefficient might be 10 times larger than for the second coefficient:

mbnma <- mb.run(network.pain,
fun=tspline(type="ls", knots=1,
pool.1="rel", method.1="random",
pool.2="rel", method.2="common"),
omega=matrix(c(10,3,3,1), nrow=2))

#### Arguments to be sent to JAGS

In addition to the arguments specific to mb.run(), it is also possible to pass any arguments accepted by R2jags::jags(). Most of these relate to improving the performance of MCMC simulations in JAGS. Some of the key arguments that may be of interest are:

• n.chains The number of Markov chains to run (default is 3)
• n.iter The total number of iterations per MCMC chain
• n.burnin The number of iterations that are discarded to ensure iterations are only saved once chains have converged
• n.thin The thinning rate which ensures that results are only saved for 1 in every n.thin iterations per chain. This can be increased to reduce autocorrelation in MCMC samples
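A sketch of passing these arguments through mb.run(), again using the COPD example from earlier (the values shown are illustrative, not recommendations):

```r
# Sketch: increase iterations, burn-in and thinning to aid convergence
mbnma <- mb.run(network.copd,
                fun=tloglin(pool.rate="rel", method.rate="random"),
                n.chains=3, n.iter=50000, n.burnin=20000, n.thin=10)
```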

## Model Convergence

MBNMAtime is run using JAGS, which performs Bayesian inference using Gibbs sampling (JAGS Computer Program 2017). This samples from the posterior distribution to obtain posterior densities for monitored parameters of interest. Convergence of the MCMC algorithm on the posterior density is therefore critical for obtaining robust results.

Multiple MCMC chains should always be run (default in MBNMAtime is 3) as this allows for traces of each chain to be examined. Consistent overlap between the traces in different chains, as well as low autocorrelation in MCMC samples suggests that convergence is likely to have been successful.

A simple output for assessing convergence is the Gelman-Rubin $$\hat{R}$$ statistic, which is based on the ratio of between- to within-chain variability (Gelman et al. 2003). Values of <1.05 are widely accepted as implying convergence for practical purposes, though some functions in MBNMAtime alert users if $$\hat{R}>1.02$$.

Rhat values for each parameter are calculated automatically in R2jags and can be shown if print() is called on an object of class("mbnma"):

print(mbnma)
#> Inference for Bugs model at "C:\Users\hp17602\AppData\Local\Temp\RtmpCIk5Wb\file44987650402d", fit using jags,
#>  3 chains, each with 20000 iterations (first 10000 discarded), n.thin = 10
#>  n.sims = 3000 iterations saved
#>              mu.vect sd.vect       2.5%        25%        50%        75%
#> D.2[1]         0.000   0.000      0.000      0.000      0.000      0.000
#> D.2[2]        17.399   6.691      3.813     12.961     17.498     21.949
#> D.2[3]        -0.241  32.400    -65.420    -21.892      0.542     20.881
#> D.2[4]        10.146   6.256     -2.148      5.911     10.127     14.248
#> D.2[5]        18.439  12.537     -6.385     10.268     18.521     27.041
#> D.2[6]        16.156   5.437      5.228     12.627     16.141     19.690
#> D.2[7]        20.533   8.362      3.650     15.321     20.514     26.048
#> d.1[1]         0.000   0.000      0.000      0.000      0.000      0.000
#> d.1[2]        -1.495   0.726     -2.962     -1.977     -1.497     -1.008
#> d.1[3]        -2.999   0.126     -3.242     -3.083     -3.002     -2.913
#> d.1[4]        -4.909   0.089     -5.083     -4.967     -4.908     -4.849
#> d.1[5]       -12.710   0.804    -14.266    -13.224    -12.719    -12.179
#> d.1[6]        -6.836   0.365     -7.549     -7.087     -6.835     -6.585
#> d.1[7]        -4.575   0.191     -4.941     -4.711     -4.573     -4.448
#> d.1[8]        -2.431   0.225     -2.859     -2.581     -2.431     -2.279
#> d.1[9]        -5.885   0.236     -6.338     -6.049     -5.882     -5.727
#> d.1[10]       -3.590   0.234     -4.053     -3.748     -3.588     -3.435
#> d.1[11]      -15.503   0.606    -16.710    -15.907    -15.503    -15.087
#> d.1[12]       -7.281   0.091     -7.459     -7.340     -7.280     -7.219
#> d.1[13]       -8.813   0.051     -8.913     -8.846     -8.812     -8.779
#> d.1[14]       -3.834   0.086     -4.005     -3.891     -3.833     -3.775
#> d.1[15]       -6.168   0.090     -6.344     -6.227     -6.166     -6.108
#> d.1[16]       -2.368   0.284     -2.911     -2.562     -2.370     -2.177
#> d.1[17]       -4.112   0.178     -4.464     -4.233     -4.113     -3.988
#> d.1[18]       -5.233   0.175     -5.577     -5.353     -5.232     -5.116
#> d.1[19]       -8.074   0.223     -8.513     -8.225     -8.072     -7.922
#> d.2[1]         0.000   0.000      0.000      0.000      0.000      0.000
#> d.2[2]        17.399   6.691      3.813     12.961     17.498     21.949
#> d.2[3]        17.399   6.691      3.813     12.961     17.498     21.949
#> d.2[4]        17.399   6.691      3.813     12.961     17.498     21.949
#> d.2[5]        17.399   6.691      3.813     12.961     17.498     21.949
#> d.2[6]        -0.241  32.400    -65.420    -21.892      0.542     20.881
#> d.2[7]        10.146   6.256     -2.148      5.911     10.127     14.248
#> d.2[8]        10.146   6.256     -2.148      5.911     10.127     14.248
#> d.2[9]        10.146   6.256     -2.148      5.911     10.127     14.248
#> d.2[10]       10.146   6.256     -2.148      5.911     10.127     14.248
#> d.2[11]       18.439  12.537     -6.385     10.268     18.521     27.041
#> d.2[12]       16.156   5.437      5.228     12.627     16.141     19.690
#> d.2[13]       16.156   5.437      5.228     12.627     16.141     19.690
#> d.2[14]       16.156   5.437      5.228     12.627     16.141     19.690
#> d.2[15]       16.156   5.437      5.228     12.627     16.141     19.690
#> d.2[16]       20.533   8.362      3.650     15.321     20.514     26.048
#> d.2[17]       20.533   8.362      3.650     15.321     20.514     26.048
#> d.2[18]       20.533   8.362      3.650     15.321     20.514     26.048
#> d.2[19]       20.533   8.362      3.650     15.321     20.514     26.048
#> rho            0.000   0.000      0.000      0.000      0.000      0.000
#> sd.beta.2     11.881   1.597      9.136     10.745     11.742     12.868
#> totresdev 150576.191  13.043 150552.971 150567.018 150575.535 150584.524
#> deviance  149882.003  13.043 149858.783 149872.829 149881.347 149890.335
#>                97.5%  Rhat n.eff
#> D.2[1]         0.000 1.000     1
#> D.2[2]        30.409 1.001  3000
#> D.2[3]        62.567 1.002  1200
#> D.2[4]        22.338 1.001  3000
#> D.2[5]        42.336 1.002  1400
#> D.2[6]        26.620 1.001  2400
#> D.2[7]        36.984 1.003   860
#> d.1[1]         0.000 1.000     1
#> d.1[2]        -0.094 1.001  2100
#> d.1[3]        -2.761 1.002  1300
#> d.1[4]        -4.735 1.001  3000
#> d.1[5]       -11.118 1.001  3000
#> d.1[6]        -6.128 1.001  3000
#> d.1[7]        -4.189 1.001  3000
#> d.1[8]        -1.991 1.001  2600
#> d.1[9]        -5.417 1.001  3000
#> d.1[10]       -3.131 1.004   540
#> d.1[11]      -14.328 1.002  1100
#> d.1[12]       -7.104 1.001  3000
#> d.1[13]       -8.714 1.003   760
#> d.1[14]       -3.668 1.001  3000
#> d.1[15]       -5.991 1.001  3000
#> d.1[16]       -1.825 1.001  3000
#> d.1[17]       -3.759 1.001  3000
#> d.1[18]       -4.880 1.004   640
#> d.1[19]       -7.646 1.001  3000
#> d.2[1]         0.000 1.000     1
#> d.2[2]        30.409 1.001  3000
#> d.2[3]        30.409 1.001  3000
#> d.2[4]        30.409 1.001  3000
#> d.2[5]        30.409 1.001  3000
#> d.2[6]        62.567 1.002  1200
#> d.2[7]        22.338 1.001  3000
#> d.2[8]        22.338 1.001  3000
#> d.2[9]        22.338 1.001  3000
#> d.2[10]       22.338 1.001  3000
#> d.2[11]       42.336 1.002  1400
#> d.2[12]       26.620 1.001  2400
#> d.2[13]       26.620 1.001  2400
#> d.2[14]       26.620 1.001  2400
#> d.2[15]       26.620 1.001  2400
#> d.2[16]       36.984 1.003   860
#> d.2[17]       36.984 1.003   860
#> d.2[18]       36.984 1.003   860
#> d.2[19]       36.984 1.003   860
#> rho            0.000 1.000     1
#> sd.beta.2     15.377 1.002  1800
#> totresdev 150604.197 1.000     1
#> deviance  149910.009 1.000     1
#>
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#>
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 85.1 and DIC = 149966.4
#> DIC is an estimate of expected predictive error (lower deviance is better).

Two useful packages for investigating convergence of monitored parameters are the mcmcplots and coda packages. These can be used to generate plots that can provide information on MCMC chain overlap and autocorrelation:

# Traceplots
mcmcplots::traplot(mbnma, "sd.beta.1")

# Running mean plots
mcmcplots::rmeanplot(mbnma, "sd.beta.1")

# Posterior densities
mcmcplots::denplot(mbnma, "sd.beta.1")

# Autocorrelation plots
coda::autocorr.plot(mbnma)

A single function (mcmcplots::mcmcplot()) can also be used to generate an HTML document containing all these plots for all monitored parameters.

If there are problems with convergence, the first solution may be to run the model for more iterations (see Arguments to be sent to JAGS), which gives the chains more iterations over which to converge. However, for models with a large number of parameters and relatively limited data, there may be too little information to allow convergence within a computationally reasonable number of iterations. In these situations there is not enough information in the data to support such a highly parameterised model, suggesting that a simpler model should be fitted. Alternatively, more informative priors can be used, but given the lack of information in the data relative to the model’s complexity, results are likely to be sensitive to these priors.

For a detailed review of MCMC convergence assessment see Sinharay (2003).

## Post-Estimation

### Deviance plots

To assess how well a model fits the data, it can be useful to look at a plot of the contributions of each data point to the total deviance or residual deviance. This can be done using devplot(). As individual deviance contributions are not automatically monitored in the model, this might require the model to be run for additional iterations.

Results can be plotted either as a scatter plot (plot.type="scatter") or a series of boxplots (plot.type="box").

# Run a first-order fractional polynomial time-course MBNMA
mbnma <- mb.run(network.pain,
fun=tfpoly(degree=1,
pool.1="rel", method.1="random",
method.power1="common"))

# Plot a box-plot of deviance contributions (the default)
devplot(mbnma, n.iter=1000)
#> dev not monitored in mbnma$parameters.to.save.
#> additional iterations will be run in order to obtain results for dev

From these plots we can see that whilst the model fit is good at later time points, it frequently underestimates responses at earlier time points. A function that appropriately captures the time-course shape should show a reasonably flat shape of deviance contributions (i.e. contributions should be similar across all time points).

If saved to an object, the output of devplot() contains the results for individual deviance contributions, and this can be used to identify any extreme outliers.

### Fitted values

Another approach for assessing model fit can be to plot the fitted values, using fitplot(). As with devplot(), this may require running additional model iterations to monitor theta.

# Plot fitted and observed values with treatment labels
fitplot(mbnma, n.iter=1000)

Fitted values are plotted as connecting lines and observed values in the original dataset are plotted as points. These plots can be used to identify if the model fits the data well for different treatments and at different parts of the time-course.

### Forest plots

Forest plots can be easily generated from MBNMA models using the plot() method on an "mbnma" object. By default this will plot a separate panel for each time-course parameter in the model. Forest plots can only be generated for parameters which vary by treatment/class.

# Run a quadratic time-course MBNMA using the alogliptin dataset
mbnma <- mb.run(network.alog,
fun=tpoly(degree=2,
pool.1="rel", method.1="random",
pool.2="rel", method.2="common"))

plot(mbnma)

### get.relative(): Calculating differences between treatments at a specified time-point

Although mb.run() estimates the effects for different treatments on different time-course parameters, these are not necessarily easy to draw conclusions from, particularly for time-course functions with less easily interpretable parameters.
get.relative() allows users to calculate mean differences between treatments at a specified time-point, even if only a subset, or none, of the treatments have been investigated at that time-point in included RCTs. These results will then be reported on the scale on which the data were modelled (i.e. depending on the link function specified in mb.run()), rather than that of the specific time-course parameters. Within the matrices of results, mean differences/relative effects are shown as the row-defined treatment versus the column-defined treatment.

allres <- get.relative(mbnma, time=20,
treats = c("alog_100", "alog_50", "placebo"))
print(allres)
#> ========================================
#> Treatment comparisons at time = 20
#> ========================================
#>
#> alog_100            0.89 (0.35, 1.4)    0.94 (-0.47, 2.4)
#> -0.89 (-1.4, -0.35) alog_50             0.06 (-1.4, 1.5)
#> -0.94 (-2.4, 0.47)  -0.06 (-1.5, 1.4)   placebo

### rank(): Ranking

Rankings can be calculated for different time-course parameters from MBNMA models by using rank() on an "mbnma" object. Any parameter monitored in an MBNMA model that varies by treatment/class can be ranked. A vector of these is assigned to params. lower_better indicates whether negative scores should be ranked as “better” (TRUE) or “worse” (FALSE).

In addition, it is possible to rank the Area Under the Curve (AUC) for a particular treatment by adding "auc" to the vector of params (included as the default). This will calculate the area under the predicted response over time, and will therefore be a function of all the time-course parameters in the model simultaneously. However, it will be dependent on the range of times chosen to integrate over (int.range), and a different choice of time-frame may lead to different treatment rankings. "auc" cannot currently be calculated from MBNMA models with more complex time-course functions (piecewise, fractional polynomials), nor with MBNMA models that use class effects.
# Identify quantile for knot at 1 week
timequant <- 1/max(network.pain$data.ab$time)

# Run a piecewise linear time-course MBNMA with a knot at 1 week
mbnma <- mb.run(network.pain,
fun=tspline(type="ls", knots = timequant,
pool.1 = "rel", method.1="common",
pool.2 = "rel", method.2="common"))

# Rank results based on AUC (calculated 0-10 weeks), more negative
# slopes considered to be "better"
ranks <- rank(mbnma,
params=c("auc", "d.2"),
int.range=c(0,10),
lower_better = TRUE,
n.iter=1000)
print(ranks)
#>
#> ========================================
#> Treatment rankings
#> ========================================
#>
#> d.2 ranking
#>
#> |Treatment |  Mean| Median| 2.5%| 97.5%|
#> |:---------|-----:|------:|----:|-----:|
#> |Pl_0      |  8.66|      9|    5| 12.03|
#> |Ce_100    | 17.41|     18|    7| 26.00|
#> |Ce_200    | 16.08|     16|   11| 21.00|
#> |Ce_400    | 17.97|     19|    8| 26.00|
#> |Du_90     |  8.59|      3|    1| 29.00|
#> |Et_10     | 23.88|     27|    4| 29.00|
#> |Et_30     | 14.88|     15|    7| 23.03|
#> |Et_5      |  4.67|      3|    1| 23.03|
#> |Et_60     | 24.43|     25|   15| 29.00|
#> |Et_90     | 19.76|     23|    3| 29.00|
#> |Lu_100    | 12.46|     12|    6| 20.00|
#> |Lu_200    | 13.88|     14|    8| 21.00|
#> |Lu_400    | 14.63|     14|    8| 22.00|
#> |Lu_NA     |  6.54|      2|    1| 28.00|
#> |Na_1000   | 21.52|     22|   16| 26.00|
#> |Na_1500   | 18.93|     19|   11| 26.00|
#> |Na_250    | 20.90|     25|    3| 29.00|
#> |Na_750    | 22.21|     23|   12| 28.00|
#> |Ox_44     |  7.43|      4|    1| 26.00|
#> |Ro_12     | 21.63|     25|    4| 29.00|
#> |Ro_125    | 12.26|      8|    1| 29.00|
#> |Ro_25     | 24.80|     26|   13| 29.00|
#> |Tr_100    |  5.99|      6|    2| 12.00|
#> |Tr_200    |  7.48|      7|    3| 16.00|
#> |Tr_300    | 10.00|      9|    4| 19.00|
#> |Tr_400    |  7.02|      6|    2| 17.03|
#> |Va_10     | 21.38|     23|    8| 28.00|
#> |Va_20     | 16.16|     16|    4| 27.00|
#> |Va_5      | 13.45|     13|    3| 25.00|
#>
#>
#> auc ranking
#>
#> |Treatment |  Mean| Median|  2.5%| 97.5%|
#> |:---------|-----:|------:|-----:|-----:|
#> |Pl_0      | 26.97|     27| 26.00| 28.00|
#> |Ce_100    | 22.31|     23| 16.00| 26.00|
#> |Ce_200    | 13.95|     14| 10.00| 18.00|
#> |Ce_400    | 13.15|     13|  5.00| 21.00|
#> |Du_90     | 17.60|     19|  6.00| 25.00|
#> |Et_10     | 27.67|     28| 21.00| 29.00|
#> |Et_30     |  6.08|      6|  3.00| 12.00|
#> |Et_5      | 13.37|     12|  2.00| 28.00|
#> |Et_60     |  3.43|      3|  1.00|  7.00|
#> |Et_90     |  6.65|      4|  1.00| 25.00|
#> |Lu_100    | 13.85|     14|  9.00| 19.00|
#> |Lu_200    | 15.71|     16| 10.00| 21.00|
#> |Lu_400    |  9.33|      9|  4.98| 15.00|
#> |Lu_NA     | 11.50|     11|  4.00| 20.00|
#> |Na_1000   |  7.57|      7|  4.00| 12.00|
#> |Na_1500   |  8.89|      8|  4.00| 16.00|
#> |Na_250    | 28.28|     29| 25.00| 29.00|
#> |Na_750    | 19.72|     20| 11.00| 26.00|
#> |Ox_44     |  4.52|      3|  1.00| 19.00|
#> |Ro_12     | 20.31|     23|  4.00| 28.00|
#> |Ro_125    |  1.55|      1|  1.00|  6.02|
#> |Ro_25     | 16.63|     18|  4.00| 26.00|
#> |Tr_100    | 24.15|     24| 20.00| 27.00|
#> |Tr_200    | 23.20|     23| 19.00| 26.00|
#> |Tr_300    | 15.81|     16|  8.00| 22.00|
#> |Tr_400    | 11.27|     11|  4.00| 21.00|
#> |Va_10     | 21.82|     23| 13.00| 26.00|
#> |Va_20     | 13.24|     13|  4.00| 22.00|
#> |Va_5      | 16.43|     17|  5.00| 24.00|

The output is an object of class("mb.rank"), containing a list for each ranked parameter in params, which consists of a summary table of rankings and raw information on treatment ranking and probabilities. The summary median ranks with 95% credible intervals can be simply displayed using print().

Histograms for ranking results can also be plotted using the plot() method, which takes the raw MCMC ranking results given in rank.matrix and plots the number of MCMC iterations for which the parameter value for each treatment was ranked a particular position.

# Ranking histograms for AUC
plot(ranks, params = "auc")

Cumulative rankograms indicating the probability of each treatment being ranked 1st, 2nd, etc. for each ranked parameter can also be plotted using cumrank(). These can be used to easily compare how different treatments rank for each ranked parameter simultaneously. By default, the Surface Under the Cumulative Ranking curve (SUCRA) values are also returned for each treatment and ranked parameter (Salanti, Ades, and Ioannidis 2011).
# Cumulative ranking for all ranked parameters
cumrank(ranks)
#> # A tibble: 58 x 3
#>    treatment parameter sucra
#>    <fct>     <chr>     <dbl>
#>  1 Pl_0      auc        2.53
#>  2 Pl_0      d.2       20.8
#>  3 Ce_100    auc        7.19
#>  4 Ce_100    d.2       12.1
#>  5 Ce_200    auc       15.5
#>  6 Ce_200    d.2       13.4
#>  7 Ce_400    auc       16.3
#>  8 Ce_400    d.2       11.5
#>  9 Du_90     auc       11.9
#> 10 Du_90     d.2       20.8
#> # ... with 48 more rows

### Prediction

After performing an MBNMA, responses can be predicted from the parameter estimates using predict() on an "mbnma" object. A number of important parameters need to be identified to make robust predictions, though defaults can be used to generate a plot that gives a good indication of the time-course relationship assuming a reference treatment response of zero. For further information the help file can be accessed using ?predict.mbnma.

One key parameter is E0, which defines what value(s) to use for the predicted response at time = 0. A single numeric value can be given for this to indicate a deterministic value, or a function representing a random number generator (RNG) distribution in R (stochastic) (e.g. E0 = ~rnorm(n, 7, 0.2)). These values can be identified for the population of interest from external data (e.g. observational/registry).

The more challenging parameter(s) to identify are those for the network reference treatment response, supplied to predict() in the ref.resp argument. Typically in an MBNMA, relative effects are estimated and the network reference effect is modelled as a nuisance parameter. Therefore we need to provide an input for this reference treatment effect for all time-course parameters modelled using pool="rel", so that we can apply the relative effects estimated in our model to it. There are two options for providing these values.

The first approach is to give values for each time-course parameter modelled using relative effects to ref.resp. This is given as a list, with a separate named element for each time-course parameter.
Each element can take either a single numeric value (deterministic), or a function representing a random number generator distribution in R (stochastic).

# Run an Emax time-course MBNMA using the osteoarthritis dataset
mbnma <- mb.run(network.pain,
fun=temax(pool.emax="rel", method.emax="common",
pool.et50="abs", method.et50="common"),
rho="dunif(0,1)", covar="varadj")

# Specify placebo time-course parameters
ref.params <- list(emax=-2)

# Predict responses for a selection of treatments using a stochastic E0 and
# placebo parameters defined in ref.params to estimate the network reference
# treatment effect
pred <- predict(mbnma, treats=c("Pl_0", "Ce_200", "Du_90", "Et_60",
"Lu_400", "Na_1000", "Ox_44", "Ro_25",
"Tr_300", "Va_20"),
E0=~rnorm(n, 8, 0.5),
ref.resp=ref.params)

print(pred)

The second is to assign ref.resp a data frame composed of single-arm studies of the network reference treatment. A separate synthesis model for the reference treatment effect will then be run, and the values from this used as the prediction reference treatment effect. This dataset could be a series of observational studies measured at multiple follow-up times that closely match the population of interest for the prediction. Alternatively it could be a subset of data from the original RCT dataset used for the MBNMA model (though this may be less generalisable to the population of interest).

# Generate a dataset of network reference treatment responses over time
placebo.df <- network.pain$data.ab[network.pain$data.ab$treatment==1,]

# Predict responses for a selection of treatments using a deterministic E0 and
# placebo.df to model the network reference treatment effect
pred <- predict(mbnma, treats=c("Pl_0", "Ce_200", "Du_90", "Et_60",
                                "Lu_400", "Na_1000", "Ox_44", "Ro_25",
                                "Tr_300", "Va_20"),
                E0=10, ref.resp=placebo.df)

print(pred)

It is also possible to specify the time points for which to predict responses (times), given as a vector of positive numbers. If left as the default, the maximum follow-up in the dataset will be used as the upper limit for the range of predicted time points.

An object of class "mb.predict" is returned, which is a list of summary tables and MCMC prediction matrices for each treatment, in addition to the original mbnma object. The summary() method can be used to print mean posterior predictions at each time point for each treatment.
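For example, a sketch (assuming the mbnma model and ref.params list defined earlier in this section) combining user-specified prediction times with a summary of the output:

```r
# Predict at user-specified time points (weeks 0-12 in half-week steps)
# rather than the default range
pred <- predict(mbnma, times=seq(0, 12, by=0.5),
                E0=10, ref.resp=ref.params)

# Print mean posterior predictions at each time point for each treatment
summary(pred)
```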

Predicted responses can also be plotted using the plot() method on an object of class("mb.predict"). Using the default arguments, the median predicted network reference treatment response is overlaid on the predicted response for each treatment. Setting overlay.ref = FALSE prevents this and causes the network reference treatment predicted response to be plotted as a separate panel. Shaded counts of observations in the original dataset at each predicted time point can be plotted over the 95% CrI for each treatment by setting disp.obs = TRUE.

plot(pred, overlay.ref=TRUE, disp.obs=TRUE)

This can be used to identify any extrapolation/interpretation of the time-course that might be occurring for a particular treatment, and where predictions might therefore be problematic.

To illustrate a situation in which this could be very informative, we can look at predicted responses for a quadratic time-course function fitted to the Obesity dataset:

# Fit a quadratic time-course MBNMA to the Obesity dataset
network.obese <- mb.network(obesityBW_CFB, reference = "plac")

mbnma <- mb.run(network.obese,
                fun=tpoly(degree=2,
                          pool.1="rel", method.1="common",
                          pool.2="rel", method.2="common"))

# Define stochastic values centred at zero for network reference treatment
ref.params <- list(beta.1=~rnorm(n, 0, 0.05), beta.2=~rnorm(n, 0, 0.0001))

# Predict responses over 50 weeks of follow-up
pred.obese <- predict(mbnma, times=c(0:50), E0=100, treats=c(1,4,15),
                      ref.resp=ref.params)

# Plot predictions
plot(pred.obese, disp.obs = TRUE)

As you can see, within the limits of the observed data the predicted responses appear reasonable. However, extrapolation beyond this for dexf_30MG leads to some rather strange results, suggesting an unrealistically large increase in body weight after 50 weeks of treatment. On the other hand, the predicted response at 50 weeks follow-up for treatment 15 is within the limits of the observed data and so is likely to be more justifiable.

#### Plotting “lumped” NMA results

As a further addition to the plots of MBNMA predictions, it is possible to add predicted results from an NMA model. This is one in which time-points within an interval (specified in overlay.nma) are “lumped” together to allow for analysis using standard NMA approaches (Lu and Ades 2004). Either a "random" (the default) or "common" effects NMA can be specified, and model fit statistics are reported below the resulting plot.

This can be useful to assess if the MBNMA predictions are in agreement with predictions from a lumped NMA model over a specific set of time-points, and can be a general indicator of the fit of the time-course model. However, it is important to note that the NMA model is not necessarily the more robust model, since it ignores potential differences in treatment effects that may arise from lumping time-points together. The wider the range specified in overlay.nma, the greater the effect of lumping and the stronger the assumption of similarity between studies.

The NMA predictions are plotted over the range specified in overlay.nma as a horizontal line, with the 95% CrI shown by a grey rectangle. The NMA predictions represent those for any time points within this range, since they lump together data at all these time points. Predictions for treatments that are disconnected from the network reference treatment at the time points specified within overlay.nma cannot be estimated, so these are not included.

# Overlay predictions from lumped NMA between 8-10 weeks follow-up
plot(pred, overlay.nma=c(8,10), n.iter=20000)
#> Reference treatment in plots is Pl_0

## Consistency Testing

When performing an MBNMA by pooling relative treatment effects (pool="rel"), the modelling approach assumes consistency between direct and indirect evidence within a network. This is an incredibly useful assumption as it allows us to improve precision on existing direct estimates, or to estimate relative effects between treatments that have not been compared in head-to-head trials, by making use of indirect evidence.

However, if this assumption does not hold it is extremely problematic for inference, so it is important to be able to test it. A number of different approaches exist to allow for this in standard Network Meta-Analysis (Dias et al. 2013). Two of these have been implemented within MBNMAtime. It is important to note that in some model specifications there is likely to be sharing of model parameters (e.g. heterogeneity parameters, correlation coefficients) across networks which will lead to more conservative tests for consistency, and may lead to an inflated type II error.

Consistency is also likely to differ depending on the model used. Failing to appropriately model the time-course function may in fact induce inconsistency in the data. “Lumping” together different time points from studies in standard NMA is known to be a potential cause of inconsistency, which is one of the reasons why accounting for time-course using MBNMA is important (Pedder et al. 2019). This is why, when performing MBNMA, it is important first to identify the best possible model in terms of time-course and common/random effects, and then to test for consistency within that model, rather than testing for consistency in models that are known not to be a good fit to the data.

Consistency testing can only be performed in networks in which closed loops of treatment comparisons exist that are drawn from independent sources of evidence. In networks which do not have any such loops of evidence, consistency cannot be formally tested (though it may still be present). The mb.nodesplit.comparisons() function identifies loops of evidence that conform to this property, and identifies a treatment comparison within that loop for which direct and indirect evidence can be compared using node-splitting (see below).

# Loops of evidence within the alogliptin dataset
splits.alog <- mb.nodesplit.comparisons(network.alog)
print(splits.alog)
#>   t1 t2    path
#> 8  3  4 3->1->4
#> 7  2  5 2->1->5
#> 6  2  4 2->1->4
#> 5  2  3 2->1->3

### Unrelated Mean Effects (UME) models

To check for consistency using UME we fit a model that does not assume consistency relationships, and that only models the direct relative effects between each arm in a study and the study reference treatment. If the consistency assumption holds true then the results from the UME model and the MBNMA will be very similar. However, if there is a discrepancy between direct and indirect evidence in the network, then the consistency assumption may not be valid and the UME results are likely to differ in several ways:

• The UME model may provide a better fit to the data, as measured by deviance or residual deviance
• The between-study SD for different parameters may be lower in the UME model
• Individual relative effects may differ in magnitude or (more severely) in direction for different treatment comparisons between UME and MBNMA models

UME can be fitted to any time-course parameter which has been modelled using relative effects (pool="rel"). UME can be specified for each time-course parameter in separate analyses, or can be modelled all at once in a single analysis.

# Identify quantile for knot at 0.5 weeks
timequant <- 0.5/max(network.pain$data.ab$time)

# Fit a B-spline MBNMA with common relative effects on slope.1 and slope.2
mbnma <- mb.run(network.pain,
                fun=tspline(type="bs", knots=timequant,
                            pool.1="rel", method.1="common",
                            pool.2="rel", method.2="common"))

# Fit a UME model on both spline coefficients simultaneously
ume <- mb.run(network.pain,
              fun=tspline(type="bs", knots=timequant,
                          pool.1="rel", method.1="common",
                          pool.2="rel", method.2="common"),
              UME=TRUE)

# Fit a UME model on the 1st coefficient only
ume.slope.1 <- mb.run(network.pain,
                      fun=tspline(type="bs", knots=timequant,
                                  pool.1="rel", method.1="common",
                                  pool.2="rel", method.2="common"),
                      UME="beta.1")

# Fit a UME model on the 2nd coefficient only
ume.slope.2 <- mb.run(network.pain,
                      fun=tspline(type="bs", knots=timequant,
                                  pool.1="rel", method.1="common",
                                  pool.2="rel", method.2="common"),
                      UME="beta.2")
#> [1] "Deviance for mbnma: -110.54"
#> [1] "Deviance for ume on beta.1 and beta.2: -118.16"
#> [1] "Deviance for ume on beta.1: -117.51"
#> [1] "Deviance for ume on beta.2: -118.04"

By comparing the deviance (or residual deviance) of the MBNMA model with models in which UME is fitted on different time-course parameters, we can see that each of the UME models reduces the deviance relative to the MBNMA. Modelling UME on either beta.1 or beta.2 alone gives a deviance close to that of the full UME model, which is suggestive of inconsistency between direct and indirect evidence on both time-course parameters.

Direct estimates from UME and MBNMA models can also be compared to examine in greater detail how inconsistency may be affecting results. However, it is important to note that whilst a discrepancy between UME and MBNMA results may be seen for a particular relative effect, the inconsistency is not exclusively applicable to that particular treatment comparison and may originate from other comparisons in the network. This is why consistency checking is so important, as a violation of the consistency assumption raises concerns about estimates for all treatments within the network.
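As a rough sketch of such a comparison (assuming, as is typical for models run in JAGS via R2jags, that the fitted objects contain a BUGSoutput element; the parameter name pattern "d.1" is illustrative):

```r
# Posterior summaries from the consistency (MBNMA) and inconsistency (UME) models
mbnma.est <- mbnma$BUGSoutput$summary
ume.est <- ume$BUGSoutput$summary

# Compare corresponding relative effect estimates side by side
mbnma.est[grep("^d\\.1", rownames(mbnma.est)), c("mean", "2.5%", "97.5%")]
ume.est[grep("^d\\.1", rownames(ume.est)), c("mean", "2.5%", "97.5%")]
```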

### Node-splitting

Another approach for consistency checking is node-splitting. This splits contributions for a particular treatment comparison into direct and indirect evidence, and the two can then be compared to test their similarity. mb.nodesplit() takes similar arguments to mb.run() that define the underlying MBNMA model in which to test for consistency, and returns an object of class("mb.nodesplit"). There are two additional arguments required:

comparisons indicates on which treatment comparisons to perform a node-split. The default value for this is to automatically identify all comparisons for which both direct and indirect evidence contributions are available using mb.nodesplit.comparisons().

nodesplit.parameters indicates on which time-course parameters to perform a node-split. This can only take time-course parameters that have been assigned relative effects in the model (pool="rel"). Alternatively the default "all" can be used to split on all available time-course parameters in the model that have been pooled using relative effects.

As up to two models will need to be run for each treatment comparison being split, this function can take some time to run.
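To split only a subset of comparisons, the output of mb.nodesplit.comparisons() can be filtered and passed to comparisons (a sketch; the format accepted by comparisons is assumed to match the data frame of t1/t2 pairs returned by mb.nodesplit.comparisons()):

```r
# Identify all comparisons with independent direct and indirect evidence
splits.pain <- mb.nodesplit.comparisons(network.pain)

# Node-split on the first two identified comparisons only
nodesplit.sub <- mb.nodesplit(network.pain,
                              comparisons=splits.pain[1:2,],
                              fun=temax(pool.emax="rel", method.emax="random",
                                        pool.et50="abs", method.et50="common"),
                              nodesplit.parameters="all")
```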

# Nodesplit using an Emax MBNMA
nodesplit <- mb.nodesplit(network.pain,
                          fun=temax(pool.emax="rel", method.emax="random",
                                    pool.et50="abs", method.et50="common"),
                          nodesplit.parameters="all")
print(nodesplit)
#> ========================================
#> Node-splitting analysis of inconsistency
#> ========================================
#>
#> emax
#>
#> |Comparison        | p-value| Median|   2.5%| 97.5%|
#> |:-----------------|-------:|------:|------:|-----:|
#> |Ro_25 vs Ce_200   |   0.440|       |       |      |
#> |-> direct         |        |  0.368| -0.419| 1.183|
#> |-> indirect       |        |  0.466| -0.317| 1.217|
#> |                  |        |       |       |      |
#> |Na_1000 vs Ce_200 |   0.357|       |       |      |
#> |-> direct         |        |  0.219| -0.204| 0.688|
#> |-> indirect       |        |  0.403| -0.367| 1.176|
#> |                  |        |       |       |      |

Performing the print() method on an object of class("mb.nodesplit") prints a summary of the node-split results to the console, whilst the summary() method will return a data frame of posterior summaries for direct and indirect estimates for each split treatment comparison and each time-course parameter.
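For instance (a sketch, assuming the nodesplit object fitted above):

```r
# Data frame of posterior summaries for direct and indirect estimates
# for each split comparison and time-course parameter
ns.df <- summary(nodesplit)
head(ns.df)
```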

It is possible to generate different plots of each node-split comparison using plot():

# Plot forest plots of direct and indirect results for each node-split comparison
plot(nodesplit, plot.type="forest")

# Plot posterior densities of direct and indirect results for each node-split comparisons
plot(nodesplit, plot.type="density")

As a further example, if we use a different time-course function (a 1-parameter exponential) that fits the data less well, and perform a node-split on its emax time-course parameter, we find that there seems to be a strong discrepancy between direct and indirect estimates. This is strong evidence to reject the consistency assumption, and to either (as in this case) try to identify a better fitting model, or to re-examine the dataset to try to establish whether differences between studies making different comparisons may be causing this.

This highlights the importance of testing for consistency after identifying an appropriate time-course and common/random treatment effects model.

# Nodesplit on emax of 1-parameter exponential MBNMA
ns.exp <- mb.nodesplit(network.pain,
                       fun=texp(pool.emax="rel", method.emax="common"),
                       nodesplit.parameters="all")
print(ns.exp)
#> ========================================
#> Node-splitting analysis of inconsistency
#> ========================================
#>
#> emax
#>
#> |Comparison        | p-value| Median|   2.5%| 97.5%|
#> |:-----------------|-------:|------:|------:|-----:|
#> |Ro_25 vs Ce_200   |   0.162|       |       |      |
#> |-> direct         |        |  0.156| -0.218| 0.494|
#> |-> indirect       |        |  0.367|  0.145| 0.579|
#> |                  |        |       |       |      |
#> |Na_1000 vs Ce_200 |   0.000|       |       |      |
#> |-> direct         |        | -0.033| -0.177| 0.107|
#> |-> indirect       |        |  0.406|  0.209| 0.588|
#> |                  |        |       |       |      |

plot(ns.exp, plot.type="forest")

## Conclusions

MBNMAtime provides a complete set of functions that allow for meta-analysis of longitudinal time-course data and plotting of a number of informative graphics. Functions are provided for ranking, prediction, and for assessing consistency when modelling using relative effects. Accounting for time-course in meta-analysis can help to explain heterogeneity/inconsistency that may arise when different time points are lumped together in conventional NMA.

The package allows for flexible modelling of either relative or absolute effects interchangeably on different time-course parameters within the same analysis, whilst providing a straightforward syntax with which to define these models.

## References

Dias, S., and A. E. Ades. 2016. “Absolute or Relative Effects? Arm-Based Synthesis of Trial Data.” Journal Article. Res Synth Methods 7 (1): 23–28. https://doi.org/10.1002/jrsm.1184.
Dias, S., N. J. Welton, A. J. Sutton, D. M. Caldwell, G. Lu, and A. E. Ades. 2013. “Evidence Synthesis for Decision Making 4: Inconsistency in Networks of Evidence Based on Randomized Controlled Trials.” Journal Article. Med Decis Making 33 (5): 641–56. https://doi.org/10.1177/0272989X12455847.
Gelman, A., J. B. Carlin, H. S. Stern, and D. B. Rubin. 2003. Bayesian Data Analysis. Book. 2nd ed. CRC Press.
Hong, H., H. Chu, J. Zhang, and B. P. Carlin. 2016. “A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons.” Journal Article. Res Synth Methods 7 (1): 6–22. https://doi.org/10.1002/jrsm.1153.
JAGS Computer Program. 2017. https://mcmc-jags.sourceforge.io/.
Jansen, J. P., M. C. Vieira, and S. Cope. 2015. “Network Meta-Analysis of Longitudinal Data Using Fractional Polynomials.” Journal Article. Stat Med 34 (15): 2294–2311. https://doi.org/10.1002/sim.6492.
Karabis, A., L. Lindner, M. Mocarski, E. Huisman, and A. Greening. 2013. “Comparative Efficacy of Aclidinium Versus Glycopyrronium and Tiotropium, as Maintenance Treatment of Moderate to Severe COPD Patients: A Systematic Review and Network Meta-Analysis.” Journal Article. Int J Chron Obstruct Pulmon Dis 8: 405–23. https://doi.org/10.2147/COPD.S48967.
Karahalios, A. E., G. Salanti, S. L. Turner, G. P. Herbison, I. R. White, A. A. Veroniki, A. Nikolakopoulou, and J. E. McKenzie. 2017. “An Investigation of the Impact of Using Different Methods for Network Meta-Analysis: A Protocol for an Empirical Evaluation.” Journal Article. Syst Rev 6 (1): 119. https://doi.org/10.1186/s13643-017-0511-x.
Kincaid, Chuck. 2005. “Guidelines for Selecting the Covariance Structure in Mixed Model Analysis.” Paper 198-30. COMSYS Information Technology Services. https://support.sas.com/resources/papers/proceedings/proceedings/sugi30/198-30.pdf.
Langford, O., J. K. Aronson, G. van Valkenhoef, and R. J. Stevens. 2016. “Methods for Meta-Analysis of Pharmacodynamic Dose-Response Data with Application to Multi-Arm Studies of Alogliptin.” Journal Article. Stat Methods Med Res. https://doi.org/10.1177/0962280216637093.
Lu, G., and A. E. Ades. 2004. “Combination of Direct and Indirect Evidence in Mixed Treatment Comparisons.” Journal Article. Stat Med 23 (20): 3105–24. https://doi.org/10.1002/sim.1875.
Pedder, H., M. Boucher, S. Dias, M. Bennetts, and N. J. Welton. 2020. “Performance of Model-Based Network Meta-Analysis (MBNMA) of Time-Course Relationships: A Simulation Study.” Journal Article. Research Synthesis Methods 11 (5): 678–97. https://doi.org/10.1002/jrsm.1432.
Pedder, H., S. Dias, M. Bennetts, M. Boucher, and N. J. Welton. 2019. “Modelling Time-Course Relationships with Multiple Treatments: Model-Based Network Meta-Analysis for Continuous Summary Outcomes.” Journal Article. Res Synth Methods 10 (2): 267–86.
Plummer, M. 2008. “Penalized Loss Functions for Bayesian Model Comparison.” Journal Article. Biostatistics 9 (3): 523–39. https://pubmed.ncbi.nlm.nih.gov/18209015/.
———. 2017. JAGS User Manual. Version 4.3.0. https://people.stat.sc.edu/hansont/stat740/jags_user_manual.pdf.
Salanti, G., A. E. Ades, and J. P. Ioannidis. 2011. “Graphical Methods and Numerical Summaries for Presenting Results from Multiple-Treatment Meta-Analysis: An Overview and Tutorial.” Journal Article. J Clin Epidemiol 64 (2): 163–71. https://doi.org/10.1016/j.jclinepi.2010.03.016.
Sinharay, S. 2003. “Assessing Convergence of the Markov Chain Monte Carlo Algorithms: A Review.” RR-03-07. ETS Research Report Series. https://onlinelibrary.wiley.com/doi/pdf/10.1002/j.2333-8504.2003.tb01899.x.
Spiegelhalter, D. J., N. G. Best, B. P. Carlin, and A. van der Linde. 2002. “Bayesian Measures of Model Complexity and Fit.” Journal Article. J R Stat Soc B 64 (4): 583–639.
Tallarita, M., M. De Iorio, and G. Baio. 2019. “A Comparative Review of Network Meta-Analysis Models in Longitudinal Randomized Controlled Trial.” Journal Article. Statistics in Medicine 38 (16): 3053–72. https://doi.org/10.1002/sim.8169.