


Statistics for biologists

Box/Box and whisker plot in R

See below for a numerical explanation of how boxplot() in R works.

> x=c(0,4,15, 1, 6, 3, 20, 5, 8, 1, 3)
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
      0       2       4       6       7      20 
> sort(x)
 [1]  0  1  1  3  3  4  5  6  8 15 20
> boxplot(x, col = 'grey')


  • The lower and upper edges of the box are determined by the first and third quartiles (2 and 7 in the above example).
  • The thick dark horizontal line is the median (4 in the example).
  • Outliers are defined as observations larger than the 3rd quartile + 1.5 * IQR (7 + 1.5*5 = 14.5) or smaller than the 1st quartile - 1.5 * IQR (2 - 1.5*5 = -5.5). See the empty circles in the plot.
  • The upper whisker extends to the largest observation below the 3rd quartile + 1.5 * IQR (8 in this example), and the lower whisker extends to the smallest observation above the 1st quartile - 1.5 * IQR (0 in this example).

Note that Wikipedia lists several possible definitions of a whisker. R uses the second definition (the Tukey boxplot).
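
The fences and whiskers above can be checked directly from the quartiles; a minimal sketch (using the vector x defined above; the hinges used by boxplot() coincide with the quartiles here):

q <- quantile(x, c(.25, .75))   # 2 and 7
iqr <- IQR(x)                   # 5
c(q[1] - 1.5*iqr, q[2] + 1.5*iqr)                            # fences: -5.5 and 14.5
x[x < q[1] - 1.5*iqr | x > q[2] + 1.5*iqr]                   # outliers: 15 20
c(min(x[x >= q[1] - 1.5*iqr]), max(x[x <= q[2] + 1.5*iqr]))  # whiskers: 0 and 8
boxplot.stats(x)$out                                         # what boxplot() draws as empty circles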

stem and leaf plot

stem(). See the R Tutorial.

BoxCox transformation

Finding transformation for normal distribution

the Holy Trinity (LRT, Wald, Score tests)

Don't invert that matrix

Linear Regression

Regression Models for Data Science in R by Brian Caffo


Different models (in R)

dummy.coef.lm() in R

Extracts coefficients in terms of the original levels of the coefficients rather than the coded variables.

Contrasts in linear regression


Multicollinearity in R


Confidence interval vs prediction interval

Confidence intervals tell you about how well you have determined the mean E(Y). Prediction intervals tell you where you can expect to see the next data point sampled. That is, CI is computed using Var(E(Y|X)) and PI is computed using Var(E(Y|X) + e).
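
A quick illustration of the difference with lm() (a minimal sketch on simulated data; the model and numbers are illustrative):

set.seed(1)
x <- rnorm(100); y <- 1 + 2*x + rnorm(100)
fit <- lm(y ~ x)
newd <- data.frame(x = 1)
predict(fit, newd, interval = "confidence")  # narrower: uncertainty about E(Y|X=1) only
predict(fit, newd, interval = "prediction")  # wider: also includes the residual variance of e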

Non- and semi-parametric regression

k-Nearest neighbor regression

  • k-NN regression in practice: boundary problem, discontinuities problem.
  • Weighted k-NN regression: we want the weight to be small when the distance is large. A common choice is weight c_{qi} = Kernel_\lambda(distance(x_i, x_q)).

Kernel regression

  • Instead of weighting NN, weight ALL points. Nadaraya-Watson kernel weighted average:

\hat{y}_q = \sum c_{qi} y_i/\sum c_{qi} = \frac{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))*y_i}{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))} .

  • The bandwidth \lambda controls a bias-variance trade-off: a small \lambda over-fits, while a large \lambda gives an over-smoothed fit. Choose it by cross-validation.
  • Kernel regression leads to a locally constant fit (see the sketch after this list).
  • Issues arise with high dimensions, data scarcity and computational complexity.
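
A minimal sketch of Nadaraya-Watson kernel regression using stats::ksmooth() (the data and bandwidths are illustrative choices):

set.seed(1)
x <- sort(runif(200, 0, 10))
y <- sin(x) + rnorm(200, sd = 0.3)
plot(x, y, col = "grey")
lines(ksmooth(x, y, kernel = "normal", bandwidth = 0.5), col = "red")  # small bandwidth: wiggly, over-fit
lines(ksmooth(x, y, kernel = "normal", bandwidth = 3),   col = "blue") # large bandwidth: over-smoothed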

Principal component analysis

R source code

> stats:::prcomp.default
function (x, retx = TRUE, center = TRUE, scale. = FALSE, tol = NULL, ...) 
{
    x <- as.matrix(x)
    x <- scale(x, center = center, scale = scale.)
    cen <- attr(x, "scaled:center")
    sc <- attr(x, "scaled:scale")
    if (any(sc == 0)) 
        stop("cannot rescale a constant/zero column to unit variance")
    s <- svd(x, nu = 0)
    s$d <- s$d/sqrt(max(1, nrow(x) - 1))
    if (!is.null(tol)) {
        rank <- sum(s$d > (s$d[1L] * tol))
        if (rank < ncol(x)) {
            s$v <- s$v[, 1L:rank, drop = FALSE]
            s$d <- s$d[1L:rank]
        }
    }
    dimnames(s$v) <- list(colnames(x), paste0("PC", seq_len(ncol(s$v))))
    r <- list(sdev = s$d, rotation = s$v, center = if (is.null(cen)) FALSE else cen, 
        scale = if (is.null(sc)) FALSE else sc)
    if (retx) 
        r$x <- x %*% s$v
    class(r) <- "prcomp"
    r
}
<bytecode: 0x000000003296c7d8>
<environment: namespace:stats>


Using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix first, since the formation of X⊤X can cause loss of precision.
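
A small check (a sketch, not the R source) that the SVD route and the eigen-decomposition of the covariance matrix agree on a well-conditioned example:

x  <- scale(USArrests, center = TRUE, scale = FALSE)
p1 <- prcomp(USArrests)                        # SVD-based
e  <- eigen(cov(x))                            # covariance-matrix-based
range(abs(p1$sdev - sqrt(e$values)))           # ~ 0
range(abs(abs(p1$rotation) - abs(e$vectors)))  # ~ 0 (columns agree up to sign)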

AIC/BIC in estimating the number of components

Consistency of AIC and BIC in estimating the number of significant components in high-dimensional principal component analysis

Related to Factor Analysis

In short,

  1. In Principal Components Analysis, the components are calculated as linear combinations of the original variables. In Factor Analysis, the original variables are defined as linear combinations of the factors.
  2. In Principal Components Analysis, the goal is to explain as much of the total variance in the variables as possible. The goal in Factor Analysis is to explain the covariances or correlations between the variables.
  3. Use Principal Components Analysis to reduce the data into a smaller number of components. Use Factor Analysis to understand what constructs underlie the data.

Calculated by Hand

Do not scale your matrix


What does it do if we choose center=FALSE in prcomp()?

In the USArrests data, using center=FALSE gives a better-looking scatter plot of the first 2 PCA components.

x1 = prcomp(USArrests) 
x2 = prcomp(USArrests, center=F)
plot(x1$x[,1], x1$x[,2])  # looks random
windows(); plot(x2$x[,1], x2$x[,2]) # looks good in some sense

Relation to Multidimensional scaling/MDS

With no missing data, classical MDS (Euclidean distance metric) is the same as PCA.
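
A quick check of this claim in R (a sketch; the coordinates agree up to the sign of each column):

x  <- scale(USArrests, center = TRUE, scale = FALSE)
md <- cmdscale(dist(x), k = 2)   # classical MDS on Euclidean distances
pc <- prcomp(x)$x[, 1:2]         # first two PC scores
range(abs(abs(md) - abs(pc)))    # ~ 0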

Comparisons of the two methods, and the question of when they coincide, are discussed elsewhere.

Matrix factorization methods

Review of principal component analysis (PCA), K-means clustering, nonnegative matrix factorization (NMF) and archetypal analysis (AA).

Partial Least Squares (PLS)

Independent component analysis

ICA is another dimensionality reduction method.



Correspondence analysis and the book Exploratory Multivariate Analysis by Example Using R


t-Distributed Stochastic Neighbor Embedding (t-SNE) is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets.
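
A minimal sketch with the Rtsne package (assuming it is installed; the perplexity value is an illustrative choice):

library(Rtsne)
dat  <- unique(iris[, 1:4])      # Rtsne() does not allow duplicate rows
tsne <- Rtsne(as.matrix(dat), dims = 2, perplexity = 30)
plot(tsne$Y, col = iris$Species[!duplicated(iris[, 1:4])], pch = 19)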

Visualize the random effects

ROC curve and Brier score

Confusion matrix, Sensitivity/Specificity/Accuracy

              Predicted
                1    0
True   1       TP   FN     Sens = TP/(TP+FN)
       0       FP   TN     Spec = TN/(FP+TN)

N = TP + FP + FN + TN;  NPV = TN/(FN+TN)
  • Sensitivity = TP / (TP + FN)
  • Specificity = TN / (TN + FP)
  • Accuracy = (TP + TN) / N
  • False discovery rate FDR = FP / (TP + FP)
  • False negative rate FNR = FN / (TP + FN)
  • Positive predictive value (PPV) = TP / # positive calls = TP / (TP + FP) = 1 - FDR
  • Negative predictive value (NPV) = TN / # negative calls = TN / (FN + TN)
  • Prevalence = (TP + FN) / N.
  • Note that PPV & NPV can also be computed from sensitivity, specificity, and prevalence:
 \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence}+(1-\text{specificity}) \times (1-\text{prevalence})}
 \text{NPV} = \frac{\text{specificity} \times (1-\text{prevalence})}{(1-\text{sensitivity}) \times \text{prevalence}+\text{specificity} \times (1-\text{prevalence})}
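
A small helper (a sketch, not from any package) that computes these quantities from the four counts and verifies the PPV formula above:

metrics <- function(TP, FP, FN, TN) {
  sens <- TP / (TP + FN); spec <- TN / (TN + FP)
  prev <- (TP + FN) / (TP + FP + FN + TN)
  ppv  <- TP / (TP + FP)
  ppv2 <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))  # formula above
  c(sensitivity = sens, specificity = spec, prevalence = prev, PPV = ppv, PPV.check = ppv2)
}
metrics(TP = 90, FP = 30, FN = 10, TN = 870)  # PPV and PPV.check both equal 0.75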

Precision recall curve

Incidence, Prevalence

genefilter package and rowpAUCs function

  • rowpAUCs function in genefilter package. The aim is to find potential biomarkers whose expression level is able to distinguish between two groups.
# source("")
# biocLite("genefilter")
library(Biobase)    # sample.ExpressionSet data
library(genefilter) # rowpAUCs()

r2 = rowpAUCs(sample.ExpressionSet, "sex", p=0.1)
plot(r2[1]) # first gene, asking specificity = .9

r2 = rowpAUCs(sample.ExpressionSet, "sex", p=1.0)
plot(r2[1]) # it won't show pAUC

r2 = rowpAUCs(sample.ExpressionSet, "sex", p=.999)
plot(r2[1]) # pAUC is very close to AUC now

Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction

Performance evaluation

Maximum likelihood

Difference of partial likelihood, profile likelihood and marginal likelihood

Generalized Linear Model

Lectures from a statistics course at Simon Fraser University.

Doing magic and analyzing seasonal time series with GAM (Generalized Additive Model) in R

Quasi Likelihood

Quasi-likelihood plays the role of a log-likelihood. The quasi-score function (the first derivative of the quasi-likelihood) is the estimating equation.


Deviance, stats::deviance() and glmnet::deviance.glmnet() from R

## an example with offsets from Venables & Ripley (2002, p.189)
utils::data(anorexia, package = "MASS")

anorex.1 <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
                family = gaussian, data = anorexia)

summary(anorex.1)
# Call:
#   glm(formula = Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian, 
#       data = anorexia)
# Deviance Residuals: 
#   Min        1Q    Median        3Q       Max  
# -14.1083   -4.2773   -0.5484    5.4838   15.2922  
# Coefficients:
#   Estimate Std. Error t value Pr(>|t|)    
# (Intercept)  49.7711    13.3910   3.717 0.000410 ***
#   Prewt        -0.5655     0.1612  -3.509 0.000803 ***
#   TreatCont    -4.0971     1.8935  -2.164 0.033999 *  
#   TreatFT       4.5631     2.1333   2.139 0.036035 *  
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# (Dispersion parameter for gaussian family taken to be 48.69504)
# Null deviance: 4525.4  on 71  degrees of freedom
# Residual deviance: 3311.3  on 68  degrees of freedom
# AIC: 489.97
# Number of Fisher Scoring iterations: 2

deviance(anorex.1)
# [1] 3311.263
  • In glmnet package. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Null deviance is defined to be 2*(loglike_sat -loglike(Null)); The NULL model refers to the intercept model, except for the Cox, where it is the 0 model. Hence dev.ratio=1-deviance/nulldev, and this deviance method returns (1-dev.ratio)*nulldev.
fit1 <- glmnet(x, y)   # x, y assumed already defined; fit over the default lambda path
deviance(fit1)  # one value for each lambda
#  [1] 98.83277 98.53893 98.29499 98.09246 97.92432 97.78472 97.66883
#  [8] 97.57261 97.49273 97.41327 97.29855 97.20332 97.12425 97.05861
# ...
# [57] 96.73772 96.73770
fit2 <- glmnet(x, y, lambda=.1) # fix lambda
deviance(fit2)
# [1] 98.10212
deviance(glm(y ~ x))
# [1] 96.73762
sum(residuals(glm(y ~ x))^2)
# [1] 96.73762

Saturated model

Simulate data

Simulate data from a specified density

Signal to noise ratio

Var(f(X)) / Var(e) if Y = f(X) + e

Effect size

\theta = \frac{\mu_1 - \mu_2}{\sigma},

See also the estimation by the pooled sd.

Multiple comparisons

Take an example: suppose 550 out of 10,000 genes are significant at the .05 level.

  1. P-value < .05 ==> expect .05*10,000 = 500 false positives
  2. False discovery rate < .05 ==> expect .05*550 = 27.5 false positives
  3. Family-wise error rate < .05 ==> the probability of at least 1 false positive is < .05

False Discovery Rate

Suppose p_1 \leq p_2 \leq ... \leq p_n. Then

\text{FDR}_i = \text{min}(1, n* p_i/i)

So if the number of tests (n) is large and/or the original p value (p_i) is large, then FDR can hit the value 1.

However, the simple formula above does not guarantee the monotonicity property from the FDR. So the calculation in R is more complicated. See How Does R Calculate the False Discovery Rate.
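
A small check in R (a sketch with simulated p-values): the BH adjustment in p.adjust() enforces monotonicity by taking a cumulative minimum from the largest p-value downward.

set.seed(1)
p <- c(runif(90), runif(10, 0, 1e-3))   # mostly null plus a few small p-values
n <- length(p)
naive <- pmin(1, n * p / rank(p))       # the simple formula above (not necessarily monotone)
bh    <- p.adjust(p, method = "BH")
o <- order(p, decreasing = TRUE)
manual <- pmin(1, cummin(n / (n:1) * p[o]))[order(o)]  # cumulative-minimum version
all.equal(bh, manual)                   # TRUE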

Below are the histograms of p-values and of the FDR (BH-adjusted) values from a real dataset (Pomeroy, in BRB-ArrayTools).

[Figure: Hist bh.svg]


q-value is defined as the minimum FDR that can be attained when calling that feature significant (i.e., expected proportion of false positives incurred when calling that feature significant).

If gene X has a q-value of 0.013, it means that 1.3% of the genes showing p-values at least as small as gene X's are expected to be false positives.

SAM/Significance Analysis of Microarrays

The percentile option is used to define the number of falsely called genes based on 'B' permutations. If we use the 90-th percentile, the number of significant genes will be less than if we use the 50-th percentile/median.

In BRCA dataset, using the 90-th percentile will get 29 genes vs 183 genes if we use median.

Multivariate permutation test

In BRCA dataset, using 80% confidence gives 116 genes vs 237 genes if we use 50% confidence (assuming maximum proportion of false discoveries is 10%). The method is published on EL Korn, JF Troendle, LM McShane and R Simon, Controlling the number of false discoveries: Application to high dimensional genomic data, Journal of Statistical Planning and Inference, vol 124, 379-398 (2004).

String Permutations Algorithm


Bayes factor

Empirical Bayes method

Naive Bayes classifier

Understanding Naïve Bayes Classifier Using R


Speeding up Metropolis-Hastings with Rcpp

Offset in Poisson regression


Overdispersion

Var(Y) = phi * E(Y). If phi > 1, there is overdispersion relative to the Poisson model; if phi < 1, there is under-dispersion (rare).


The Poisson model fit is not good when residual deviance/df >> 1. The lack of fit may be due to missing data, missing covariates or overdispersion.

Subjects within each covariate combination still differ greatly.

Consider quasi-Poisson or negative binomial regression; a small sketch follows.
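
A minimal sketch of both alternatives on simulated overdispersed counts (the data are illustrative):

library(MASS)                            # for glm.nb()
set.seed(1)
x  <- rnorm(200)
mu <- exp(1 + 0.5 * x)
y  <- rnbinom(200, mu = mu, size = 1)    # overdispersed counts: Var = mu + mu^2

f1 <- glm(y ~ x, family = poisson)
f1$deviance / f1$df.residual             # >> 1, a sign of overdispersion
f2 <- glm(y ~ x, family = quasipoisson)  # same coefficients, inflated standard errors
summary(f2)$dispersion                   # estimate of phi
f3 <- glm.nb(y ~ x)                      # negative binomial regression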

Test of overdispersion or underdispersion in Poisson models

Negative Binomial

The mean of the Poisson distribution can itself be thought of as a random variable drawn from the gamma distribution thereby introducing an additional free parameter.
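
A quick numerical illustration of this mixture representation (a sketch):

set.seed(1)
n <- 1e5; size <- 2; mu <- 3
lambda <- rgamma(n, shape = size, rate = size / mu)  # Gamma-distributed Poisson means
y1 <- rpois(n, lambda)                               # Gamma mixture of Poissons
y2 <- rnbinom(n, size = size, mu = mu)               # negative binomial directly
c(mean(y1), mean(y2))   # both close to mu = 3
c(var(y1),  var(y2))    # both close to mu + mu^2/size = 7.5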

Survival data

Kaplan & Meier and Nelson-Aalen: survfit.formula()

  • Landmarks
    • Kaplan-Meier: 1958
    • Nelson: 1969
    • Cox and Breslow: 1972 S(t) = exp(-Lambda(t))
    • Aalen: 1978 Lambda(t)
  • D distinct times t_1 < t_2 < \cdots < t_D. At time t_i there are d_i events. Let Y_i be the number of individuals who are at risk at time t_i. The quantity d_i/Y_i provides an estimate of the conditional probability that an individual who survives to just prior to time t_i experiences the event at time t_i. The KM estimator of the survival function and the Nelson-Aalen estimator of the cumulative hazard are defined as follows (for t_1 \le t):

\hat{S}(t) &= \prod_{t_i \le t} [1 - d_i/Y_i] \\
\hat{H}(t) &= \sum_{t_i \le t} d_i/Y_i
library(survival)
str(kidney)
'data.frame':	76 obs. of  7 variables:
$ id     : num  1 1 2 2 3 3 4 4 5 5 ...
$ time   : num  8 16 23 13 22 28 447 318 30 12 ...
$ status : num  1 1 1 0 1 1 1 1 1 1 ...
$ age    : num  28 28 48 48 32 32 31 32 10 10 ...
$ sex    : num  1 1 2 2 1 1 2 2 1 1 ...
$ disease: Factor w/ 4 levels "Other","GN","AN",..: 1 1 2 2 1 1 1 1 1 1 ...
$ frail  : num  2.3 2.3 1.9 1.9 1.2 1.2 0.5 0.5 1.5 1.5 ...
kidney[order(kidney$time), c("time", "status")]
kidney[kidney$time == 13, ] # one is dead and the other is alive
length(unique(kidney$time)) # 60

sfit <- survfit(Surv(time, status) ~ 1, data = kidney)
str(sfit)
List of 13
$ n        : int 76
$ time     : num [1:60] 2 4 5 6 7 8 9 12 13 15 ...
$ n.risk   : num [1:60] 76 75 74 72 71 69 65 64 62 60 ...
$ n.event  : num [1:60] 1 0 0 0 2 2 1 2 1 2 ...
$ n.censor : num [1:60] 0 1 2 1 0 2 0 0 1 0 ...
$ surv     : num [1:60] 0.987 0.987 0.987 0.987 0.959 ...
$ type     : chr "right"
all(sapply(sfit$time, function(tt) sum(kidney$time >= tt)) == sfit$n.risk) # TRUE
all(sapply(sfit$time, function(tt) sum(kidney$status[kidney$time == tt])) == sfit$n.event) # TRUE
all(sapply(sfit$time, function(tt) sum(1-kidney$status[kidney$time == tt])) == sfit$n.censor) #  TRUE
all(cumprod(1 - sfit$n.event/sfit$n.risk) == sfit$surv) #  FALSE
range(abs(cumprod(1 - sfit$n.event/sfit$n.risk) - sfit$surv))
# [1] 0.000000e+00 1.387779e-17
  • Note that the KM estimate is a right-continuous step function, with intervals closed on the left and open on the right. For t \in [t_j, t_{j+1}) for a certain j, we have \hat{S}(t) = \prod_{i=1}^j (1-d_i/n_i), where d_i is the number of people who have an event during the interval [t_i, t_{i+1}) and n_i is the number of people at risk just before the beginning of the interval [t_i, t_{i+1}).
  • The product-limit estimator can be constructed by using a reduced-sample approach. We can estimate the P(T > t_i | T \ge t_i) = \frac{Y_i - d_i}{Y_i} for i=1,2,\cdots,D. 
S(t_i) = \frac{S(t_i)}{S(t_{i-1})} \frac{S(t_{i-1})}{S(t_{i-2})} \cdots \frac{S(t_2)}{S(t_1)} \frac{S(t_1)}{S(0)} S(0) = P(T > t_i | T \ge t_i) P(T >t_{i-1} | T \ge t_{i-1}) \cdots P(T>t_2|T \ge t_2) P(T>t_1 | T \ge t_1) because S(0)=1 and, for a discrete distribution, S(t_{i-1}) = P(T > t_{i-1}) = P(T \ge t_i).
  • Self consistency. If we had no censored observations, the estimator of the survival function at a time t is the proportion of observations which are larger than t, that is, \hat{S}(t) = \frac{1}{n}\sum I(X_i > t).
  • Curves are plotted in the same order as they are listed by print() (which gives a one-line summary of each). For example, -1 < 1 and 'Maintained' < 'Nonmaintained'. That means the labels listed in the legend() command should follow the same order as the curves.
  • Kaplan and Meier is used to give an estimator of the survival function S(t)
  • Nelson-Aalen estimator is for the cumulative hazard H(t). Note that 0 \le H(t) < \infty and H(t) \rightarrow \infty as t goes to infinity. So there is a constraint on the hazard function, see Wikipedia.

Note that S(t) is related to H(t) by 
H(t) = -ln[S(t)]. 
The two estimators are similar (see Examples 4.1A and 4.1B from Klein and Moeschberger).

The Nelson-Aalen estimator has two primary uses in analyzing data

  1. Selecting between parametric models for the time to event
  2. Crude estimates of the hazard rate h(t). This is related to the estimation of the survival function in the Cox model. See Section 8.6 of Klein and Moeschberger.

The Kaplan–Meier estimator (the product limit estimator) is an estimator for estimating the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment.

The "+" sign means censored observations and a long vertical line (not '+') means there is a dead observation at that time.

If the last observation (longest survival time) corresponds to an event, the survival curve goes down to zero. Otherwise, the survival curve remains flat at the end.

Usually the KM curve of treatment group is higher than that of the control group.

The Y-axis (the probability that a member from a given population will have a lifetime exceeding time) is often called

  • Cumulative probability
  • Cumulative survival
  • Percent survival
  • Probability without event
  • Proportion alive/surviving
  • Survival
  • Survival probability

[Figures: KMcurve.png, KMcurve cumhaz.png]

> library(survival)
> str(aml$x)
 Factor w/ 2 levels "Maintained","Nonmaintained": 1 1 1 1 1 1 1 1 1 1 ...
> plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) , 
      lty=2:3, mark.time = TRUE) # a (small) subset, mark.time is used to show censored obs
> aml[7:17,]
   time status             x
7    31      1    Maintained
8    34      1    Maintained
9    45      0    Maintained
10   48      1    Maintained
11  161      0    Maintained
12    5      1 Nonmaintained
13    5      1 Nonmaintained
14    8      1 Nonmaintained
15    8      1 Nonmaintained
16   12      1 Nonmaintained
17   16      0 Nonmaintained
> legend(100, .9, c("Maintenance", "No Maintenance"), lty = 2:3) # lty: 2=dashed, 3=dotted
> title("Kaplan-Meier Curves\nfor AML Maintenance Study") 

# Cumulative hazard plot
# Lambda(t) = -log(S(t)); 
# see
plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) , 
      lty=2:3, mark.time = T, fun="cumhaz", ylab="Cumulative Hazard")
mydata <- data.frame(time=c(3,6,8,12,12,21),status=c(1,1,0,1,1,1))
km <- survfit(Surv(time, status)~1, data=mydata)
plot(km, mark.time = T)
survest <- stepfun(km$time, c(1, km$surv))
> str(km)
List of 13
 $ n        : int 6
 $ time     : num [1:5] 3 6 8 12 21
 $ n.risk   : num [1:5] 6 5 4 3 1
 $ n.event  : num [1:5] 1 1 0 2 1
 $ n.censor : num [1:5] 0 0 1 0 0
 $ surv     : num [1:5] 0.833 0.667 0.667 0.222 0
 $ type     : chr "right"
 $ std.err  : num [1:5] 0.183 0.289 0.289 0.866 Inf
 $ upper    : num [1:5] 1 1 1 1 NA
 $ lower    : num [1:5] 0.5827 0.3786 0.3786 0.0407 NA
 $ conf.type: chr "log"
 $ conf.int : num 0.95

[Figure: Kmcurve toy.svg]

Breslow estimate

Survival curves with number at risk at bottom: survminer package

R function survminer::ggsurvplot()

Paper examples

Hazard ratio forest plot: ggforest() from survminer

Survival curve with confidence interval

Parametric models and survival function for censored data

Assume the CDF of survival time T is F(\cdot) and the CDF of the censoring time C is G(\cdot),

P(T>t, \delta=1) &= \int_t^\infty (1-G(s))dF(s), \\
P(T>t, \delta=0) &= \int_t^\infty (1-F(s))dG(s)


Parametric models and likelihood function for uncensored data


  • Exponential.  T \sim Exp(\lambda) . H(t) = \lambda t. and ln(S(t)) = -H(t) = -\lambda t.
  • Weibull.  T \sim W(\lambda,p). H(t) = \lambda^p t^p. and ln(-ln(S(t))) = ln(\lambda^p t^p)=const + p ln(t) .

See also accelerated life models where a set of covariates were used to model survival time.

Survival modeling

Accelerated life models - a direct extension of the classical linear model and also Kalbfleish and Prentice (1980).

log T_i = x_i' \beta + \epsilon_i

  • T_i = exp(x_i' \beta) T_{0i}. So if there are two groups (x=1 and x=0) and exp(\beta) = 2, people in one group live twice as long as people in the other group.
  • S_1(t) = S_0(t/ exp(x' \beta)). This explains the meaning of accelerated failure-time. Depending on the sign of \beta' x, the time is either accelerated by a constant factor or degraded by a constant factor. If exp(\beta)=2, the probability that a member in group one (eg treatment) will be alive at age t is exactly the same as the probability that a member in group zero (eg control group) will be alive at age t/2.
  • The hazard function \lambda_1(t) = \lambda_0(t/exp(x'\beta))/ exp(x'\beta) . So if exp(\beta)=2, at any given age people in group one would be exposed to half the risk of people in group zero half their age.

In applications,

  • If the errors are normally distributed, then we obtain a log-normal model for the T. Estimation of this model for censored data by maximum likelihood is known in the econometric literature as a Tobit model.
  • If the errors have an extreme value distribution, then T has an exponential distribution. The hazard \lambda satisfies the log linear model \log \lambda_i = x_i' \beta.

Proportional hazard models

Note PH models is a type of multiplicative hazard rate models h(x|Z) = h_0(x)c(\beta' Z) where c(\beta' Z) = \exp(\beta ' Z).

Assumption: Survival curves for two strata (determined by the particular choices of values for the covariates) must have hazard functions that are proportional over time (i.e. a constant relative hazard over time). This is the meaning of the proportional hazards assumption: the ratio of the hazard rates for two individuals with covariate values Z and Z^* is a constant function of time.

\frac{h(t|Z)}{h(t|Z^*)} = \frac{h_0(t)\exp(\beta 'Z)}{h_0(t)\exp(\beta ' Z^*)} = \exp(\beta' (Z-Z^*)) \mbox{    independent of time}

Test the assumption

  • cox.zph() can be used to test the proportional hazards assumption for a Cox regression model fit (see the sketch after this list).
  • Log-log Kaplan-Meier curves and other methods.
  • If a predictor satisfies the proportional hazards assumption, then the graph of the survival function versus survival time should result in parallel curves; similarly, the graph of log(-log(survival)) versus log(survival time) should result in parallel lines. This method does not work well for continuous predictors or for categorical predictors with many levels, because the graph becomes too cluttered.
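
A minimal sketch of testing the assumption with cox.zph() (using the ovarian data as an illustration):

library(survival)
fit <- coxph(Surv(futime, fustat) ~ age + rx, data = ovarian)
zp <- cox.zph(fit)
zp            # non-significant p-values are consistent with proportional hazards
plot(zp[1])   # scaled Schoenfeld residuals for 'age' versus time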

Cox Regression

Weibull and Exponential model to Cox model

In summary:

  • Weibull distribution (Klein): h(t) = p \lambda t^{p-1} and S(t) = exp(-\lambda t^p). If p > 1, the risk increases over time; if p < 1, the risk decreases over time.
  • Exponential distribution h(t) = constant (independent of t). This is a special case of Weibull distribution (p=1).
  • Weibull (and also exponential) distribution regression model is the only case which belongs to both the proportional hazards and the accelerated life families.

\frac{h(x|Z_1)}{h(x|Z_2)} = \frac{h_0(x\exp(-\gamma' Z_1)) \exp(-\gamma ' Z_1)}{h_0(x\exp(-\gamma' Z_2)) \exp(-\gamma ' Z_2)} = \frac{(a/b)\left(\frac{x \exp(-\gamma ' Z_1)}{b}\right)^{a-1}\exp(-\gamma ' Z_1)}{(a/b)\left(\frac{x \exp(-\gamma ' Z_2)}{b}\right)^{a-1}\exp(-\gamma ' Z_2)}  \quad \mbox{which is independent of time x}
Distribution | f(t) = h(t) S(t) | h(t) | S(t) | Mean
  • Exponential (Klein p37): \lambda \exp(-\lambda t) | \lambda | \exp(-\lambda t) | 1/\lambda
  • Weibull (Klein, wikipedia): p\lambda t^{p-1}\exp(-\lambda t^p) | p\lambda t^{p-1} | \exp(-\lambda t^p) | \frac{\Gamma(1+1/p)}{\lambda^{1/p}}
  • Exponential (R): \lambda \exp(-\lambda t), where \lambda is the rate | \lambda | \exp(-\lambda t) | 1/\lambda
  • Weibull (R, wikipedia): \frac{a}{b}\left(\frac{x}{b}\right)^{a-1} \exp(-(\frac{x}{b})^a), where a is the shape and b is the scale | \frac{a}{b}\left(\frac{x}{b}\right)^{a-1} | \exp(-(\frac{x}{b})^a) | b\Gamma(1+1/a)
  • Accelerated failure-time model. Let Y=\log(T)=\mu + \gamma'Z + \sigma W. Then the survival function of T at the covariate Z,

S_T(t|Z) &= P(T > t |Z) \\
         &= P(Y > \ln t|Z) \\
         &= P(\mu + \sigma W > \ln t-\gamma' Z | Z) \\
         &= P(e^{\mu + \sigma W} > t\exp(-\gamma'Z) | Z) \\
         &= S_0(t \exp(-\gamma'Z)).

where S_0(t) denote the survival function T when Z=0. Since h(t) = -\partial \ln (S(t)), the hazard function of T with a covariate value Z is related to a baseline hazard rate h_0 by (p56 Klein)

h(t|Z) = h_0(t\exp(-\gamma' Z)) \exp(-\gamma ' Z)
# Check the Weibull-exponential relation: if E ~ Exp(1), then E^(1/a) * b ~ Weibull(shape=a, scale=b)
> mean(rexp(1000)^(1/2))        # ~ E[Weibull(shape=2, scale=1)] = Gamma(1 + 1/2) = 0.886
[1] 0.8902948
> mean(rweibull(1000, 2, 1))
[1] 0.8856265

> mean((rweibull(1000, 2, scale=4)/4)^2)  # (T/b)^a ~ Exp(1), so the mean is ~ 1
[1] 1.008923

Graphical way to check Weibull, AFT, PH

CDF follows Unif(0,1)

Take the Exponential distribution for example


Another example comes from simulating survival times. Note that this is exactly the approach of Bender et al (2005). See also the simsurv (newer) and survsim (older) packages.


#Define the following parameters outlined in the step: 
n = 1000 
beta_0 = 0.5
beta_1 = -1
beta_2 = 1 

b = 1.6 #This will be changed later as mentioned in Step 5 of documentation 

#Step 1
x_1<-rbinom(n, 1, 0.25)
x_2<-rbinom(n, 1, 0.7)

#Step 2 
U<-runif(n, 0,1)
T<-(-log(U)*exp(-(beta_0+beta_1*x_1+beta_2*x_2))) #Eqn (5) 

Fn <- ecdf(T) #
# verify F(T) or 1-F(T) ~ U(0, 1)
# look at the plot of survival probability vs time
plot(T, 1 - Fn(T))

Simulate survival data

Note that status = 1 means an event (e.g. death) happened, i.e. Ti <= Ci. That is, the status variable used in R/S-Plus is the death indicator.

n <- 30
x <- scale(1:n, TRUE, TRUE) # create covariates (standardized)
                            # the original example does not work on large 'n'
myrates <- exp(3*x+1)
y <- rexp(n, rate = myrates) # generates the r.v.
cen <- rexp(n, rate = 0.5 )  #  E(cen)=1/rate
ycen <- pmin(y, cen)
di <- as.numeric(y <= cen)
survreg(Surv(ycen, di)~x, dist="weibull")$coef[2]  # -3.080125
coxph(Surv(ycen, di)~x)$coef  # 2.457466 

# no censor
survreg(Surv(y,rep(1, n))~x,dist="weibull")$coef[2]  # -3.137603
survreg(Surv(y,rep(1, n))~x,dist="exponential")$coef[2]  # -3.143095
coxph(Surv(y,rep(1, n))~x)$coef  # 2.717794 

# See the pdf note for the rest of code

 \lambda = exp(-intercept)
> futime <- rexp(1000, 5)
> survreg(Surv(futime,rep(1,1000))~1,dist="exponential")$coef
(Intercept) 
  -1.618263 
> exp(1.618263)
[1] 5.044321

\gamma &= 1/scale \\
   \alpha &= exp(-(Intercept)*\gamma) 
> survreg(Surv(futime,rep(1,1000))~1,dist="weibull")
survreg(formula = Surv(futime, rep(1, 1000)) ~ 1, dist = "weibull")


Scale= 1.048049 

Loglik(model)= 620.1   Loglik(intercept only)= 620.1
n= 1000

h(t|x) = 1/scale = \frac{1}{\lambda/e^{\beta 'x}} = \frac{e^{\beta ' x}}{\lambda} = h_0(t) \exp(\beta' x)

n = 10000
beta1 = 2; beta2 = -1
lambdaT = .002 # baseline hazard
lambdaC = .004  # hazard of censoring
x1 = rnorm(n,0)
x2 = rnorm(n,0)
# true event time
T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2)) 
# No censoring
event2 <- rep(1, length(T))
coxph(Surv(T, event2)~ x1 + x2)
#       coef exp(coef) se(coef)     z      p
# x1  1.9982    7.3761   0.0188 106.1 <2e-16
# x2 -1.0020    0.3671   0.0127 -79.1 <2e-16
# Likelihood ratio test=15556  on 2 df, p=0
# n= 10000, number of events= 10000 

# Censoring
C = rweibull(n, shape=1, scale=lambdaC)   #censoring time
time = pmin(T,C)  #observed time is min of censored and true
event = time==T   # set to 1 if event is observed
coxph(Surv(time, event)~ x1 + x2)
#       coef exp(coef) se(coef)     z      p
# x1  2.0104    7.4662   0.0225  89.3 <2e-16
# x2 -0.9921    0.3708   0.0155 -63.9 <2e-16
# Likelihood ratio test=11321  on 2 df, p=0
# n= 10000, number of events= 6002

Predefined censoring rates

Simulating survival data with predefined censoring rates for proportional hazards models

Cross validation

  • Cross validation in survival analysis by Verweij & van Houwelingen, Stat in medicine 1993.
  • Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data. Simon et al, Brief Bioinform. 2011

Survival rate

  • Disease-free survival (DFS): the period after curative treatment [disease eliminated] when no disease can be detected
  • Progression-free survival (PFS), overall survival (OS). PFS is the length of time during and after the treatment of a disease, such as cancer, that a patient lives with the disease but it does not get worse. See its use in the NCI-MATCH trial.
  • Time to progression: The length of time from the date of diagnosis or the start of treatment for a disease until the disease starts to get worse or spread to other parts of the body. In a clinical trial, measuring the time to progression is one way to see how well a new treatment works. Also called TTP.
  • Metastasis-free survival (MFS) time: the period until metastasis is detected
  • Understanding Statistics Used to Guide Prognosis and Evaluate Treatment (DFS & PFS rate)


HER2-positive breast cancer

Cox Regression

Let Yi denote the observed time (either censoring time or event time) for subject i, and let Ci be the indicator that the time corresponds to an event (i.e. if Ci = 1 the event occurred and if Ci = 0 the time is a censoring time). The hazard function for the Cox proportional hazard model has the form

\lambda(t|X) = \lambda_0(t)\exp(\beta_1X_1 + \cdots + \beta_pX_p) = \lambda_0(t)\exp(X \beta^\prime).

This expression gives the hazard at time t for an individual with covariate vector (explanatory variables) X. Based on this hazard function, a partial likelihood (defined on hazard function) can be constructed from the datasets as

L(\beta) = \prod_{i:C_i=1}\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j},

where θj = exp(Xj β) and X1, ..., Xn are the covariate vectors for the n independently sampled individuals in the dataset (treated here as column vectors). A toy example is worked through in the linked notes.

The corresponding log partial likelihood is

\ell(\beta) = \sum_{i:C_i=1} \left(X_i \beta^\prime - \log \sum_{j:Y_j\ge Y_i}\theta_j\right).

This function can be maximized over β to produce maximum partial likelihood estimates of the model parameters.

The partial score function is 
\ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right),

and the Hessian matrix of the partial log likelihood is

\ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j\times \sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime}{[\sum_{j:Y_j\ge Y_i}\theta_j]^2}\right).

Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients.
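
The following is a minimal sketch (not the survival package's implementation) that maximizes the partial log-likelihood numerically with optim() and compares the result to coxph(); the ovarian data have no tied event times, so no tie correction is needed.

library(survival)
# negative partial log-likelihood for a single covariate
negloglik <- function(beta, time, status, x) {
  eta <- x * beta
  -sum(sapply(which(status == 1), function(i) {
    eta[i] - log(sum(exp(eta[time >= time[i]])))   # risk set: subjects with time >= t_i
  }))
}
opt <- optim(par = 0, fn = negloglik, method = "BFGS",
             time = ovarian$futime, status = ovarian$fustat, x = ovarian$age)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
c(optim = opt$par, coxph = unname(coef(fit)))   # essentially identical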

If X is age, then the coefficient is likely >0. If X is some treatment, then the coefficient is likely <0.

Compare the partial likelihood to the full likelihood

Partial likelihood when there are ties

In R's coxph(): nearly all Cox regression programs use the Breslow method by default, but not this one. The Efron approximation is used as the default here; it is more accurate when dealing with tied death times and is computationally just as efficient.

Hazard (function) and survival function

A hazard is the rate at which events happen, so that the probability of an event happening in a short time interval is the length of time multiplied by the hazard.

h(t) = \lim_{\Delta t \to 0} \frac{P(t \leq T < t+\Delta t|T \geq t)}{\Delta t} = \frac{f(t)}{S(t)} = -\frac{d}{dt}\ln[S(t)]


H(x) = \int_0^x h(u) du = -\ln[S(x)].


S(x) = e^{-H(x)}

Hazards may vary with time; the assumption in proportional hazards models for survival is that the ratio of hazards between groups is a constant proportion over time.


  • If h(t)=c, S(t) is exponential. f(t) = c exp(-ct). The mean is 1/c.
  • If \log h(t) = c + \rho t, S(t) is Gompertz distribution.
  • If \log h(t)=c + \rho \log (t), S(t) is Weibull distribution.
  • For Cox regression, the survival function can be shown to be S(t|X) = S_0(t) ^ {\exp(X\beta)}.

S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\
  &= e^{-\int_0^t h_0(u) exp(X\beta) du} \\
  &= e^{-\int_0^t h_0(u) du \cdot exp(X \beta)} \\
  &= S_0(t)^{exp(X \beta)}


S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\
  &= e^{-\int_0^t h_0(u) exp(X\beta) du} \\
  &= e^{-H_0(t) \cdot exp(X \beta)} 

where the cumulative baseline hazard at time t, H_0(t), is commonly estimated through the non-parametric Breslow estimator.

Check the proportional hazard (constant HR over time) assumption by cox.zph()

Extract p-values

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

# method 1:
beta <- coef(fit)
se <- sqrt(diag(vcov(fit)))
1 - pchisq((beta/se)^2, 1)

# method 2:
coef(summary(fit))[, "Pr(>|z|)"]

Expectation of life & expected future lifetime

  • The average lifetime is the same as the area under the survival curve.

\mu &= \int_0^\infty t f(t) dt \\
  &= \int_0^\infty S(t) dt

by integrating by parts, making use of the fact that -f(t) is the derivative of S(t), which has limits S(0)=1 and S(\infty)=0. The average lifetime may not be estimable with censored data if there are censored observations lasting beyond the last recorded death.

The expected future lifetime given survival to time t_0 is

\frac{1}{S(t_0)} \int_0^{\infty} t\,f(t_0+t)\,dt = \frac{1}{S(t_0)} \int_{t_0}^{\infty} S(t)\,dt.
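
A small sketch in R relating the area under the KM curve to the (restricted) mean survival time:

library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = aml)
print(fit, print.rmean = TRUE)   # restricted mean survival time ("*rmean")
# area under the KM step function up to the largest observed time
sum(diff(c(0, fit$time)) * c(1, head(fit$surv, -1)))   # should match the *rmean above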

Hazard Ratio/Relative Risk

A hazard ratio is often reported as a “reduction in risk of death or progression”. This risk reduction is calculated as 1 minus the hazard ratio (exp(beta)); e.g., an HR of 0.84 corresponds to a 16% reduction in risk.

Another example (John Fox) is assuming Y ~ age + prio + others.

  • If exp(beta_age) = 0.944, an additional year of age reduces the hazard by a factor of .944 on average, i.e. by (1-.944)*100 = 5.6 percent.
  • If exp(beta_prio) = 1.096, it means each prior conviction increases the hazard by a factor of 1.096, or 9.6 percent.

Hazard ratio is not the same as the relative risk ratio.

The article "Interpreting risks and ratios in therapy trials" is useful too.

For two groups that differ only in treatment condition, the ratio of the hazard functions is given by e^\beta, where \beta is the estimate of treatment effect derived from the regression model. See here.

Compute hazard ratios from coxph() in R (hint: exp(beta)).

Prognostic index is defined in the Prognostic index/risk scores section below.

Basics of the Cox proportional hazards model. Good prognostic factor (b<0 or HR<1) and bad prognostic factor (b>0 or HR>1).

Variable selection: variables were retained in the prediction models if they had a hazard ratio of <0.85 or >1.15 (for binary variables) and were statistically significant at the 0.01 level. see Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study.

Hazard Ratio and death probability

Suppose S0(t)=.2 (20% survived at time t) and the hazard ratio (hr) is 2 (a group has twice the chance of dying than a comparison group), then (Cox model is assumed)

  1. S1(t) = S0(t)^hr = 0.2^2 = 0.04 (4% survived at t)
  2. The corresponding death probabilities are 0.8 and 0.96.
  3. If a subject is exposed to twice the risk of a reference subject at every age, then the probability that the subject will be alive at any given age is the square of the probability that the reference subject (covariates = 0) would be alive at the same age. See p10 of this lecture notes.
  4. exp(x*beta) is the relative risk associated with covariate value x.

Hazard Ratio Forest Plot

The forest plot quickly summarizes the hazard ratio data across multiple variables. If the line crosses the 1.0 value, the hazard ratio is not significant and there is no clear advantage for either arm.

Estimate baseline hazard h_0(t), cumulative baseline hazard H_0(t), baseline survival function S_0(t) and the survival function S(t)

Google: how to estimate baseline hazard rate


The (Breslow) hazard estimator for a Cox model reduces to the Nelson-Aalen estimator when there are no covariates. You can also compute it from information returned by survfit().

fit <- survfit(Surv(time, status) ~ 1, data = aml)
cumsum(fit$n.event/fit$n.risk) # the Nelson-Aalen estimator for the times given by fit$times
-log(fit$surv)   # cumulative hazard

Manually compute

Breslow estimator of the baseline cumulative hazard rate reduces to the Nelson-Aalen estimator when there are no covariates present; see p283 of Klein 2003.

\hat{H}_0(t) &= \sum_{t_i \le t} \frac{d_i}{W(t_i;b)}, \\
W(t_i;b) &= \sum_{j \in R(t_i)} \exp(\sum_{k=1}^p b_k z_{jk})

where  t_1 < t_2 < \cdots < t_D denotes the distinct death times and d_i be the number of deaths at time t_i. The estimator of the baseline survival function S_0(t) = \exp [-H_0(t)] is given by \hat{S}_0(t) = \exp [-\hat{H}_0(t)]. Below we use the formula to compute the cumulative hazard (and survival function) and compare them with the result obtained using R's built-in functions. The following code is a modification of the snippet from the post Breslow cumulative hazard and basehaz().

bhaz <- function(beta, time, status, x) {
  # time can be duplicated
  # x (covariate) must be continuous
  data <- data.frame(time, status, x)
  data <- data[order(data$time), ]
  dt   <- unique(data$time)
  k    <- length(dt)
  risk <- exp(data.matrix(data[, -c(1:2)]) %*% beta)
  h    <- rep(0, k)
  for(i in 1:k) {
    h[i] <- sum(data$status[data$time == dt[i]]) / sum(risk[data$time >= dt[i]])
  }
  return(data.frame(h, dt))
}

# Example 1 'ovarian' which has unique survival time
all(table(ovarian$futime) == 1) # TRUE

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
# 1. compute the cumulative baseline hazard 
# 1.1 manually using Breslow estimator at the observed time
h0 <- bhaz(fit$coef, ovarian$futime, ovarian$fustat, ovarian$age)
H0 <- cumsum(h0$h)
# 1.2 use basehaz (always compute at the observed time)
# since we consider the baseline, we need to use centered=FALSE
H0.base <- basehaz(fit, centered = FALSE)   # (object name chosen here)
cbind(H0, h0$dt, H0.base$hazard, H0.base$time)
range(abs(H0 - H0.base$hazard)) # [1] 6.352747e-22 5.421011e-20

# 2. compute the estimation of the survival function
# 2.1 manually using Breslow estimator at t = observed time (one dim, sorted) 
#     and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0),  exp(coef(fit) * ovarian$age), "^")
dim(S1) # row = times, col = age
# 2.2 How about considering times not at observed (e.g. h0$dt - 10)?
# Let's look at the hazard rate
newtime <- h0$dt - 10
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0),  exp(coef(fit) * ovarian$age), "^")
dim(S2) # row = newtime, col = age

# 2.3 use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 2.117244e-17 6.544321e-12
# 2.4 How about considering times not at observed (e.g. h0$dt - 10)?
# Note that we cannot put times larger than the observed
S4 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000e+00 6.544321e-12
# Example 2 'kidney' which has duplicated time
fit <- coxph(Surv(time, status) ~ age, data = kidney)
# manually compute the breslow cumulative baseline hazard
#   at the observed time
h0 <- with(kidney, bhaz(fit$coef, time, status, age))
H0 <- cumsum(h0$h)
# use basehaz (always compute at the observed time)
# since we consider the baseline, we need to use centered=FALSE
H0.base <- basehaz(fit, centered = FALSE)
head(cbind(H0, h0$dt, H0.base$hazard, H0.base$time))
range(abs(H0 - H0.base$hazard)) # [1] 0.000000000 0.005775414

# manually compute the estimation of the survival function
# at t = observed time (one dim, sorted) and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0),  exp(coef(fit) * kidney$age), "^")
dim(S1) # row = times, col = age
# How about considering times not at observed (h0$dt - 1)?
# Let's look at the hazard rate
newtime <- h0$dt - 1
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0),  exp(coef(fit) * kidney$age), "^")
dim(S2) # row = newtime, col = age

# use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = kidney$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 0.000000000 0.002783715
# How about considering times not at observed (h0$dt - 1)?
# We cannot put times larger than the observed
S4 <- summary(survfit(fit, data.frame(age = kidney$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000000 0.002783715
  • basehaz() (an alias for survfit) has a parameter centered, which by default is TRUE. Actually basehaz() gives the cumulative hazard H(t). That is, exp(-basehaz(fit)$hazard) is the same as summary(survfit(fit))$surv. The basehaz() function is not useful by itself.
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian) 
> fit
coxph(formula = Surv(futime, fustat) ~ age, data = ovarian)

      coef exp(coef) se(coef)    z      p
age 0.1616    1.1754   0.0497 3.25 0.0012

Likelihood ratio test=14.3  on 1 df, p=0.000156
n= 26, number of events= 12 

# Note the default 'centered = TRUE' for basehaz() 
> exp(-basehaz(fit)$hazard) # exp(-cumulative hazard)
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
> dim(ovarian)
[1] 26  6
> exp(-basehaz(fit)$hazard)[ovarian$fustat == 1]
 [1] 0.9880206 0.9738738 0.9545899 0.8973620 0.8243117 0.8243117 0.7750981
 [8] 0.7750981 0.5204807 0.5204807 0.5204807 0.5204807
> summary(survfit(fit))$surv 
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.7750981 0.7244924 0.6734146 0.5962187 0.5204807
> summary(survfit(fit, data.frame(age=mean(ovarian$age))), 
          time=ovarian$futime[ovarian$fustat == 1])$surv
# Same result as above
> summary(survfit(fit, data.frame(age=mean(ovarian$age))), 
          times = sort(ovarian$futime))$surv
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807

Predicted survival probability in Cox model: survfit.coxph(), plot.survfit() & summary.survfit( , times)

For theory, see section 8.6 Estimation of the survival function in Klein & Moeschberger.

For R, see Extract survival probabilities in Survfit by groups

plot.survfit(): use fun="log" to plot the log survival curve, fun="event" to plot cumulative events, and fun="cumhaz" to plot the cumulative hazard (f(y) = -log(y)).

The plot function below will draw 4 curves: S_0(t)^{\exp(\hat{\beta}_{age}*60)}, S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageII})}, S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIII})}, S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIV})}.

library(KMsurv) # Data package for Klein & Moeschberger
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)

# Figure 8.3 from Section 8.6
plot(survfit(coxobj, newdata = data.frame(age=rep(60, 4), stage=factor(1:4))), lty = 1:4)

# Estimated probability for a 60-year old for different stage patients
out <- summary(survfit(coxobj, data.frame(age = rep(60, 4), stage=factor(1:4))), times = 5)
#  time n.risk n.event survival1 survival2 survival3 survival4
#    5     34      40     0.702     0.665      0.51     0.142
sum(larynx$time >=5) # n.risk
# [1] 34
sum(larynx$delta[larynx$time <=5]) # n.event
# [1] 40
sum(larynx$time >5) # Wrong
# [1] 31
sum(larynx$delta[larynx$time <5]) # Wrong
# [1] 39

# 95% confidence interval
out$upper
# 0.8629482 0.9102532 0.7352413 0.548579
out$lower
# 0.5707952 0.4864903 0.3539527 0.03691768

We need to pay attention when the number of covariates is large (and we don't want to spell out each covariate in the formula). The key is to create a data frame and use a dot (.) in the formula. This fixes the warning message 'newdata' had XXX rows but variables found have YYY rows from running survfit(..., newdata).

Another way is to use as.formula() if we don't want to create a new object.

xsub <- data.frame(xtrain)
colnames(xsub) <- paste0("x", 1:ncol(xsub))

coxobj <- coxph(Surv(ytrain[, "time"], ytrain[, "status"]) ~ ., data = xsub)

newdata <- data.frame(xtest)
colnames(newdata) <- paste0("x", 1:ncol(newdata))

survprob <- summary(survfit(coxobj, newdata=newdata), 
                    times = 5)$surv[1, ]  
# since there is only 1 time point, we select the first row in surv (surv is a matrix with one row).

Predicted survival probabilities from glmnet: c060/peperr, biospear packages and manual computation

## S3 method for class 'glmnet'
predictProb(object, response, x, times, complexity, ...)

junk <- biospear::simdata(n=500, p=500, q.main = 10, q.inter = 0, 
         = .5, m0=1,, 
                  beta.main= -.5, b.corr = .7,, 
                  wei.shape = 1, recr=3, fu=2, timefactor=1)
library(c060) # Error: object 'predictProb' not found

y <- cbind(time=junk$time, status=junk$status)
x <- cbind(1, junk[, "treat", drop = FALSE])
names(x) <- c("inter", "treat")
x <- as.matrix(x)
cvfit <- cv.glmnet(x, y, family = "cox")
obj <- glmnet(x, y, family = "cox")
xnew <- matrix(c(0,0), nr=1)
predictProb(obj, y, xnew, times=1, complexity = cvfit$lambda.min)
# Error in exp(lp[response[, 1] >= t.unique[i]]) : 
#   non-numeric argument to mathematical function
# In addition: Warning message:
# In : applied to non-(list or vector) of type 'NULL'
expSurv(res, traindata, method, ci.level = .95, boot = FALSE, nboot, smooth = TRUE, = 4, time, trace = TRUE, ncores = 1)
# S3 method for resexpSurv
predict(object, newdata, ...)
# continue the example
# BMsel() takes a little while
resBM <- biospear::BMsel(
    data = junk, 
    method = "lasso", 
    inter = FALSE, 
    folds = 5)
# Note: if we specify time =5 in expsurv(), we will get an error message
# 'time' is out of the range of the observed survival time.
# Note: if we try to specify more than 1 time point, we will get the following msg
# 'time' must be an unique value; no two values are allowed.
esurv <- biospear::expSurv(
    res = resBM,
    traindata = junk,
    boot = FALSE,
    time = median(junk$time),
    trace = TRUE)
# debug(biospear:::plot.resexpSurv)
plot(esurv, method = "lasso")
# This is equivalent to doing the following
xx <- attributes(esurv)$m.score[, "lasso"]
wc <- order(xx); wgr <- 1:nrow(esurv$surv)
p1 <- plot(x = xx[wc], y = esurv$surv[wgr, "lasso"][wc], 
           xlab='prognostic score', ylab='survival prob')
# prognostic score beta*x in this cases.
# ignore treatment effect and interactions
xxmy <- as.vector(as.matrix(junk[, names(resBM$lasso)]) %*% resBM$lasso)
# See page4 of the paper. Scaled scores were used in the plot
range(abs(xx - (xxmy-quantile(xxmy, .025)) / (quantile(xxmy, .975) -  quantile(xxmy, .025))))
# [1] 1.500431e-09 1.465241e-06

h0 <- bhaz(resBM$lasso, junk$time, junk$status, junk[, names(resBM$lasso)])
newtime <- median(junk$time)
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
# newx <- junk[ , names(resBM$lasso)]
# Compute the estimate of the survival probability at training x and time = median(junk$time)
# using Breslow method
S2 <- outer(exp(-H0),  exp(xxmy), "^") # row = newtime, col = newx
range(abs(esurv$surv[wgr, "lasso"] - S2))
# [1] 6.455479e-18 2.459136e-06
# My implementation of the prognostic plot
#   Note that the x-axis on the plot is based on prognostic scores beta*x, 
#   not on treatment modifying scores gamma*x as described in the paper.
#   Maybe it is because inter = FALSE in BMsel() we have used.
p2 <- plot(xxmy[wc], S2[wc], xlab='prognostic score', ylab='survival prob')  # cf p1

> names(esurv)
[1] "surv"  "lower" "upper"
> str(esurv$surv)
 num [1:500, 1:2] 0.772 0.886 0.961 0.731 0.749 ...
 - attr(*, "dimnames")=List of 2
  ..$ : NULL
  ..$ : chr [1:2] "lasso" "oracle"

esurv2 <- predict(esurv, newdata = junk)
esurv2$surv       # All zeros?

Bug from the sample data (interaction was considered here; inter = TRUE) ?

resBM <-  BMsel(
  data = Breast,
  x = 4:ncol(Breast),
  y = 2:1,
  tt = 3,
  inter = TRUE,
  std.x = TRUE,
  folds = 5,
  method = c("lasso", "lasso-pcvl"))

esurv <- expSurv(
  res = resBM,
  traindata = Breast,
  boot = FALSE,
  smooth = TRUE,
  time = 4,
  trace = TRUE
)
Computation of the expected survival
Computation of analytical confidence intervals
Computation of smoothed B-splines
Error in cobs(x = x, y = y, print.mesg = F, print.warn = F, method = "uniform",  : 
  There is at least one pair of adjacent knots that contains no observation.


  • fit$loglik is a vector of length 2 (Null model, fitted model)
  • Use the anova() command (the anova.coxph method in the survival package) to do a likelihood ratio test. Note this function does not work on glmnet objects.
  • residuals.coxph Calculates martingale, deviance, score or Schoenfeld residuals for a Cox proportional hazards model.
  • No deviance() on coxph object!
  • logLik() function will return fit$loglik[2]


\mathrm{AIC} &= 2k - 2\ln(\hat L) \\
\mathrm{AICc} &= \mathrm{AIC} + \frac{2k^2 + 2k}{n - k - 1}
fit <- glmnet(x, y, family = "multinomial") 

tLL <- fit$nulldev - deviance(fit) # 2*(loglik - loglik_Null), i.e. 2*ln(L) up to a constant
k <- fit$df
n <- fit$nobs
AICc <- -tLL+2*k+2*k*(k+1)/(n-k-1)
# (assumes N, ty and x are already defined, e.g. as in the Cox example from the glmnet help page)
tcens=rbinom(n=N,prob=.3,size=1)# censoring indicator
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)
coxobj <- coxph(Surv(ty, 1-tcens) ~ x)
coxobj_small <- coxph(Surv(ty, 1-tcens) ~1)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(ty, 1 - tcens)
# Model 1: ~ x
# Model 2: ~ 1
# loglik  Chisq Df P(>|Chi|)  
# 1 -4095.2                      
# 2 -4102.7 15.151  6   0.01911 *

fit2 <- glmnet(x,y,family="cox", lambda=0) # lambda=0: unpenalized Cox fit
deviance(fit2)
# [1] 8190.313
fit2$df
# [1] 6
fit2$nulldev - deviance(fit2) # Log-Likelihood ratio statistic
# [1] 15.15097
1-pchisq(fit2$nulldev - deviance(fit2), fit2$df)
# [1] 0.01911446

Here is another example including a factor covariate:

library(KMsurv) # Data package for Klein & Moeschberger
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)
coxobj$coefficients
#       age    stage2    stage3    stage4 
# 0.0190311 0.1400402 0.6423817 1.7059796
coxobj_small <- coxph(Surv(time, delta)~age, data = larynx)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(time, delta)
# Model 1: ~ age + stage
# Model 2: ~ age
# loglik  Chisq Df P(>|Chi|)   
# 1 -187.71                       
# 2 -195.55 15.681  3  0.001318 **
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

# Now let's look at the glmnet() function.
# It seems glmnet does not recognize factor covariates.
coxobj2 <- with(larynx, glmnet(cbind(age, stage), Surv(time, delta), family = "cox", lambda=0))
coxobj2$nulldev - deviance(coxobj2)  # Log-Likelihood ratio statistic
# [1] 15.72596
coxobj1 <- with(larynx, glmnet(cbind(1, age), Surv(time, delta), family = "cox", lambda=0))
deviance(coxobj1) - deviance(coxobj2) 
# [1] 13.08457
1-pchisq(deviance(coxobj1) - deviance(coxobj2) , coxobj2$df-coxobj1$df)
# [1] 0.0002977376

glmnet + Cox models

Error in glmnet: x should be a matrix with 2 or more columns

Error in coxnet: (list) object cannot be coerced to type 'double'

Fix: do not use data.frame in X. Use cbind() instead.

Prognostic index/risk scores

linear.predictors component in coxph object

The $linear.predictors component is not \beta' x. It is \beta' (x-\mu_x). See this post.

predict.coxph, prognostic index & risk score

  • predict.coxph() Compute fitted values and regression terms for a model fitted by coxph. The Cox model is a relative risk model; predictions of type "linear predictor", "risk", and "terms" are all relative to the sample from which they came. By default, the reference value for each of these is the mean covariate within strata. The primary underlying reason is statistical: a Cox model only predicts relative risks between pairs of subjects within the same strata, and hence the addition of a constant to any covariate, either overall or only within a particular stratum, has no effect on the fitted results. Returned value: a vector or matrix of predictions, or a list containing the predictions (element "fit") and their standard errors (element "") if the option is TRUE.
predict(object, newdata,
    type=c("lp", "risk", "expected", "terms", "survival"),, na.action=na.pass, terms=names(object$assign), collapse,
    reference=c("strata", "sample"),  ...)
fit <- coxph(Surv(time, status) ~ age , lung)
#  Call:
#  coxph(formula = Surv(time, status) ~ age, data = lung)
#       coef exp(coef) se(coef)    z     p
# age 0.0187      1.02   0.0092 2.03 0.042
# Likelihood ratio test=4.24  on 1 df, p=0.0395  n= 228, number of events= 165 
fit$means
#      age 
# 62.44737 

# type = "lp" (linear predictor)
as.numeric(predict(fit, type="lp"))[1:10]
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518  0.21626733
# [7]  0.10394626  0.16010680 -0.17685643 -0.02709500
0.0187 * (lung$age[1:10] - fit$means)
# [1]  0.21603421  0.10383421 -0.12056579 -0.10186579 -0.04576579  0.21603421
# [7]  0.10383421  0.15993421 -0.17666579 -0.02706579
fit$linear.predictors[1:10]
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518
# [6]  0.21626733  0.10394626  0.16010680 -0.17685643 -0.02709500

# type = "risk" (Risk score)
> as.numeric(predict(fit,type="risk"))[1:10]
 [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342 1.1095408
 [8] 1.1736362 0.8379001 0.9732688
> exp((lung$age-mean(lung$age)) * 0.0187)[1:10]
 [1] 1.2411448 1.1094165 0.8864188 0.9031508 0.9552657 1.2411448
 [7] 1.1094165 1.1734337 0.8380598 0.9732972
> exp(fit$linear.predictors)[1:10]
 [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342
 [7] 1.1095408 1.1736362 0.8379001 0.9732688

Survival risk prediction

Assessing the performance of prediction models

Hazard ratio


hazard.ratio(x, surv.time, surv.event, weights, strat, alpha = 0.05, 
             method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)

D index


D.index(x, surv.time, surv.event, weights, strat, alpha = 0.05, 
        method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)

Concordance index (C-index)

  • The area under the ROC curve (a plot of sensitivity versus 1-specificity) is also called the C-statistic. It is a measure of discrimination generalized for survival data (Harrell 1982 & 2001). The ROC curve is a function of the sensitivity and specificity at each value of the model's score. (Nancy Cook, 2007)
    • The sensitivity of a test is the probability of a positive test result, or of a value above a threshold, among those with disease (cases).
    • The specificity of a test is the probability of a negative test result, or of a value below a threshold, among those without disease (noncases).
    • Perfect discrimination corresponds to a c-statistic of 1 & is achieved if the scores for all the cases are higher than those for all the non-cases.
    • The c-statistic is the probability that the measure or predicted risk is higher for a case than for a noncase.
    • The c-statistic is not the probability that individuals are classified correctly or that a person with a high test score will eventually become a case.
    • C-statistic is a rank-based measure. The c-statistic describes how well models can rank order cases and noncases, but it is not a function of the actual predicted probabilities.
  • How to interpret the output for calculating concordance index (c-index)? 
P(\beta' Z_1 > \beta' Z_2|T_1 < T_2)
where T is the survival time and Z is the covariates.
    • It is the fraction of pairs in your data, where the observation with the higher survival time has the higher probability of survival predicted by your model.
    • High values mean that your model predicts higher probabilities of survival for higher observed survival times.
    • The c index estimates the probability of concordance between predicted and observed responses. A value of 0.5 indicates no predictive discrimination and a value of 1.0 indicates perfect separation of patients with different outcomes. (p371 Harrell 1996)
  • Drawback of C-statistics:
    • Even though rank indexes such as c are widely applicable and easily interpretable, they are not sensitive for detecting small differences in discrimination ability between two models. This is due to the fact that a rank method considers the (prediction, outcome) pairs (0.01,0), (0.9, 1) as no more concordant than the pairs (0.05,0), (0.8, 1). A more sensitive likelihood-ratio Chi-square-based statistic that reduces to R2 in the linear regression case may be substituted. (p371 Harrell 1996)
    • If the model is correct, the likelihood based measures may be more sensitive in detecting differences in prediction ability, compared to rank-based measures such as C-indexes. (Uno 2011 p 1113)
  • survcomp package: concordance.index().
  • Hmisc package: rcorr.cens(). A short example follows below.

See also 5 Ways to Estimate Concordance Index for Cox Models in R, Why Results Aren't Identical?
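
As a quick illustration of the rank-based interpretation above, Harrell's C for a Cox model on the lung data can be computed with survival::concordance() or Hmisc::rcorr.cens() (a minimal sketch; the model below is an arbitrary choice):

library(survival)
library(Hmisc)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

# survival::concordance(); the same value is reported by summary(fit)
concordance(fit)

# rcorr.cens() assumes larger scores go with longer survival, so flip the
# sign of the Cox linear predictor (larger lp = higher risk, shorter survival)
rcorr.cens(-fit$linear.predictors, Surv(lung$time, lung$status))["C Index"]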

Integrated brier score

Assessment and comparison of prognostic classification schemes for survival data Graf et al Stat. Med. 1999 2529-45

  • Because point predictions of event-free times will almost inevitably give inaccurate and unsatisfactory results, the mean squared error of prediction \frac{1}{n}\sum_1^n (T_i - \hat{T}(X_i))^2 will not be considered.
  • Another approach is to predict the survival or event status Y=I(T > \tau) at a fixed time point \tau for a patient with X=x. This leads to the expected Brier score E[(Y - \hat{S}(\tau|X))^2], where \hat{S}(\tau|X) is the estimated event-free probability (survival probability) at time \tau for a subject with predictor variable X.
  • The time-dependent Brier score (without censoring)

  \mbox{Brier}(\tau) = \frac{1}{n}\sum_{i=1}^n (I(T_i>\tau) - \hat{S}(\tau|X_i))^2
  • The time-dependent Brier score (with censoring, C is the censoring variable)

  \mbox{Brier}(\tau) = \frac{1}{n}\sum_{i=1}^n\bigg[\frac{(\hat{S}(\tau|X_i))^2 I(t_i \leq \tau, \delta_i=1)}{\hat{S}_C(t_i)} + \frac{(1 - \hat{S}(\tau|X_i))^2 I(t_i > \tau)}{\hat{S}_C(\tau)}\bigg]

where \hat{S}_C(t_i) = P(C > t_i) is the Kaplan-Meier estimate of the censoring distribution, with t_i the survival time of patient i. The Brier score can be integrated over time t \in [0, \tau] with respect to some weight function W(t), for which a natural choice is (1 - \hat{S}(t))/(1-\hat{S}(\tau)). The lower the integrated Brier score, the higher the prediction accuracy.

  • Useful benchmark values for the Brier score are 33%, which corresponds to predicting the risk by a random number drawn from U[0, 1], and 25% which corresponds to predicting 50% risk for everyone. See Evaluating Random Forests for Survival Analysis using Prediction Error Curves by Mogensen et al J. Stat Software 2012 (pec package). The paper has a good summary of different R package implementing Brier scores.

R function
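
A minimal sketch with the pec package mentioned above (the Cox model, data and settings are illustrative choices, not taken from the cited paper):

library(survival)
library(pec)

# pec() needs the design matrix stored in the fit, hence x = TRUE
fit <- coxph(Surv(time, status) ~ age + sex, data = lung, x = TRUE)

# Apparent (in-sample) Brier score over time; censoring weights come from the
# Kaplan-Meier estimate of the censoring distribution (formula = ~ 1)
perror <- pec(object = list("Cox age+sex" = fit),
              formula = Surv(time, status) ~ 1,
              data = lung)
crps(perror)   # integrated (cumulative) Brier score
plot(perror)   # prediction error curves: Cox model vs the Kaplan-Meier reference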

Papers on high dimensional covariates

  • Assessment of survival prediction models based on microarray data, Bioinformatics , 2007, vol. 23 (pg. 1768-74)
  • Allowing for mandatory covariates in boosting estimation of sparse high-dimensional survival models, BMC Bioinformatics , 2008, vol. 9 pg. 14


  • The C-statistic is the probability of concordance between predicted and observed survival.
  • Harrell’s Concordance. In SAS, the s.e. of the Harrell's C-statistics is estimated by the delta method as described in Comparing two correlated C indices with right‐censored survival outcome: a one‐shot nonparametric approach. 
C_H = \frac{\sum_{i,j}I(t_i < t_{j}) I(\hat{\beta} Z_i > \hat{\beta} Z_j) \delta_i}{\sum_{i,j} I(t_i < t_j) \delta_i}
which converges to a censoring-dependent quantity  P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \text{min}(D_1,D_2)). Here D is the censoring variable.
  • On the C-statistics for Evaluating Overall Adequacy of Risk Prediction Procedures with Censored Survival Data by Uno et al 2011. Let \tau be a specified time point within the support of the censoring variable. 
C(\tau) = \text{UnoC}(\hat{\pi}, \tau) = \frac{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\pi}_i > \hat{\pi}_{i'}) \delta_i}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) \delta_i}
, a measure of the concordance between \hat{\pi}_i = \hat{\beta} Z_i (the linear predictor) and the survival time. \hat{S}_C(t) is the Kaplan-Meier estimator of the censoring distribution/time, obtained by flipping the definition of \delta_i (failure events are treated as "censored" observations and censored observations as "failures") and computing the KM estimate as usual; see p207 of Satten 2001 and the source code of kmcens() in survC1. Note that C_\tau converges to  P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \tau).
    • real data 1: fit a Cox model. Get risk scores \hat{\beta}'Z. Compute the point and confidence interval estimates (M=500 indep. random samples with the same sample size as the observation data) of C_\tau for different \tau. Compare them with the conventional C-index procedure (Korn).
    • real data 1: compute C_\tau for a full model and a reduced model, the difference (C_\tau^{(A)} - C_\tau^{(B)} = .01), and the 95% confidence interval (-0.00, .02) of the difference for testing the importance of some variable (HDL in this case). Although HDL is highly significant (p=0) with respect to the risk of CV disease, its incremental value evaluated via the C-statistic is quite modest.
    • real data 2: goal - evaluate the prognostic value of a new gene signature in predicting the time to death or metastasis for breast cancer patients. Two models were fitted; one with age+ER and the other with gene+age+ER. For each model we can calculate the point and interval estimates of C_\tau for different \tau's.
    • simulation: T is from Weibull regression for case 1 and log-normal regression for case 2. Covariates = (age, ER, gene). 3 kinds of censoring were considered. Sample size is 100, 150, 200 and 300. 1000 iterations. Compute coverage probabilities and average length of 95% confidence intervals, bias and root mean square error for \tau equals to 10 and 15. Compared with the conventional approach, the new method has higher coverage probabilities and less bias in 6 scenarios.
  • Statistical methods for the assessment of prognostic biomarkers (Part I): Discrimination by Tripep et al 2010
  • Assessment of Discrimination in Survival Analysis (C-statistics, etc) by anonymous
  • Uno's C-statistics
    • It seems the C-statistic is a decreasing function of tau.
    • (From ?UnoC) Uno's estimator is based on inverse-probability-of-censoring weights and does not assume a specific working model for deriving the predictor lpnew. It is assumed, however, that there is a one-to-one relationship between the predictor and the expected survival times conditional on the predictor. Note that the estimator implemented in UnoC is restricted to situations where the random censoring assumption holds.
    • UnoC() from the survAUC package. It can take new data. The tau parameter: Truncation time. The resulting C tells how well the given prediction model works in predicting events that occur in the time range from 0 to tau. Con: no confidence interval estimate for C_\tau nor C_\tau^{(A)} - C_\tau^{(B)}.
    • Est.Cval() from the survC1 package (authored by H. Uno). It doesn't take new data nor the vector of predictors obtained from the test data. Pro: Inf.Cval() can compute the CI of C_\tau & Inf.Cval.Delta() for the difference C_\tau^{(A)} - C_\tau^{(B)}.
# require training and predict sets
library(survival); library(survAUC)
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit <- coxph(Surv(futime, fustat) ~ age, data=TR)

lpnew <- predict(train.fit, newdata=TE)
Surv.rsp <- Surv(TR$futime, TR$fustat)
Surv.rsp.new <- Surv(TE$futime, TE$fustat)

UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*1)
# [1] 0.9761905
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*2)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*3)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*4)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*5)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors)
# [1] 0.7308979
# So UnoC() gives the same result as Est.Cval() below.
# Now try on a new data set. Question: why do we need Surv.rsp?
UnoC(Surv.rsp, Surv.rsp.new, lpnew)
# [1] 0.7333333
UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=365.25*2)
# [1] 0.7333333

# tau is mandatory (>0), no need to have training and predict sets
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*1)$Dhat
# [1] 0.9761905
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*2)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*3)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*4)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*5)$Dhat
# [1] 0.7308979

svg("~/Downloads/c_stat_scatter.svg", width=8, height=5)
par(mfrow = c(1, 2))  # two panels side by side
plot(TR$futime, train.fit$linear.predictors, main="training data", xlab="time", ylab="predictor")
mtext("C=.731 at t=2", 3)
plot(TE$futime, lpnew, main="testing data", xlab="time", ylab="predictor")
mtext("C=.733 at t=2", 3)
dev.off()
C stat scatter.svg

C-statistic vs LRT comparing nested models

1. Binary data

x1 = rnorm(1000)           # some continuous variables 
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
df = data.frame(y=y,x1=x1,x2=x2)
fit <- glm( y~x1+x2,data=df,family="binomial")
summary(fit)
# Estimate Std. Error z value Pr(>|z|)    
# (Intercept)   0.9915     0.1185   8.367   <2e-16 ***
#   x1            2.2731     0.1789  12.709   <2e-16 ***
#   x2            3.1853     0.2157  14.768   <2e-16 ***
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# (Dispersion parameter for binomial family taken to be 1)
# Null deviance: 1355.16  on 999  degrees of freedom
# Residual deviance:  582.93  on 997  degrees of freedom
# AIC: 588.93
confint.default(fit)   # Wald confidence intervals for the coefficients
#                 2.5 %   97.5 %
# (Intercept) 0.7592637 1.223790
# x1          1.9225261 2.623659
# x2          2.7625861 3.608069

# LRT - likelihood ratio test
fit2 <- glm( y~x1,data=df,family="binomial")
anova.res <- anova(fit2, fit)
# Analysis of Deviance Table
# Model 1: y ~ x1
# Model 2: y ~ x1 + x2
#   Resid. Df Resid. Dev Df Deviance
# 1       998    1186.16            
# 2       997     582.93  1   603.23
1-pchisq( abs(anova.res$Deviance[2]), abs(anova.res$Df[2]))
# [1] 0

# Method 1: use the ROC package (Bioconductor) to compute AUC
library(ROC)
markers <- predict(fit, newdata = data.frame(x1, x2), type = "response")
roc1 <- rocdemo.sca(truth = y, data = markers)
auc <- AUC(roc1); print(auc) # [1] 0.9459085

markers2 <- predict(fit2, newdata = data.frame(x1), type = "response")
roc2 <- rocdemo.sca(truth = y, data = markers2)
auc2 <- AUC(roc2); print(auc2) # [1] 0.7259098
auc - auc2 # [1] 0.2199987

# Method 2: use pROC package to compute AUC
roc_obj <- pROC::roc(y, markers)
pROC::auc(roc_obj) # Area under the curve: 0.9459

# Method 3: Compute AUC by hand
auc_probability <- function(labels, scores, N=1e7){
  pos <- sample(scores[labels], N, replace=TRUE)
  neg <- sample(scores[!labels], N, replace=TRUE)
  # sum( (1 + sign(pos - neg))/2)/N # does the same thing
  (sum(pos > neg) + sum(pos == neg)/2) / N # give partial credit for ties
}
auc_probability(as.logical(y), markers) # [1] 0.945964

2. Survival data

range(ovarian$futime) # [1]   59 1227
plot(survfit(Surv(futime, fustat) ~ 1, data = ovarian))

coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
#        coef exp(coef) se(coef)     z      p
# rx  -0.8040    0.4475   0.6320 -1.27 0.2034
# age  0.1473    1.1587   0.0461  3.19 0.0014
# Likelihood ratio test=15.9  on 2 df, p=0.000355
# n= 26, number of events= 12 

library(survC1)
covs0 <- as.matrix(ovarian[, c("rx")])
covs1 <- as.matrix(ovarian[, c("rx", "age")])
tau <- 365.25*2   # truncation time; must be set before calling Inf.Cval.Delta() (value assumed here)
Delta <- Inf.Cval.Delta(ovarian[, 1:2], covs0, covs1, tau, itr=200)
round(Delta, digits=3)
#          Est    SE Lower95 Upper95
# Model1 0.844 0.119   0.611   1.077
# Model0 0.659 0.148   0.369   0.949
# Delta  0.185 0.197  -0.201   0.572

Time dependent ROC curves


Prognostic markers vs predictive markers


Logistic regression

Simulate binary data from the logistic model

x1 = rnorm(1000)           # some continuous variables 
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
#now feed it to glm:
df = data.frame(y=y,x1=x1,x2=x2)
glm( y~x1+x2,data=df,family="binomial")

Building a Logistic Regression model from scratch

Odds ratio

Calculate the odds ratio from the coefficient estimates; see this post.

N  <- 100               # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N,  30, 8)
X3 <- abs(rnorm(N, 60, 30))
Y  <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)

# dichotomize Y and do logistic regression
Yfac   <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))

exp(cbind(coef(glmFit), confint(glmFit)))

Medical applications

Subgroup analysis

Other related keywords: recursive partitioning, randomized clinical trials (RCT)

Interaction analysis

Statistical Learning



Chapter 8 of the book.

  • Bootstrap mean is approximately a posterior average.
  • Bootstrap aggregation or bagging average: Average the prediction over a collection of bootstrap samples, thereby reducing its variance. The bagging estimate is defined by
\hat{f}_{bag}(x) = \frac{1}{B}\sum_{b=1}^B \hat{f}^{*b}(x).
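
A minimal sketch of the bagging estimate \hat{f}_{bag} with rpart trees as the base learner (simulated data; B and the choice of learner are illustrative):

library(rpart)
set.seed(1)
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
dat <- data.frame(x, y)

B <- 50
preds <- replicate(B, {
  idx <- sample(n, replace = TRUE)          # bootstrap sample
  fit <- rpart(y ~ x, data = dat[idx, ])    # f^{*b}
  predict(fit, newdata = dat)               # evaluate on the original x's
})
fhat_bag <- rowMeans(preds)                 # (1/B) * sum_b f^{*b}(x)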

Where Bagging Might Work Better Than Boosting



AdaBoost.M1 by Freund and Schapire (1997):

The error rate on the training sample is 
\bar{err} = \frac{1}{N} \sum_{i=1}^N I(y_i \neq G(x_i)),

Sequentially apply the weak classification algorithm to repeatedly modified versions of the data, thereby producing a sequence of weak classifiers G_m(x), m=1,2,\dots,M.

The predictions from all of them are combined through a weighted majority vote to produce the final prediction: 
G(x) = sign[\sum_{m=1}^M \alpha_m G_m(x)].
Here  \alpha_1,\alpha_2,\dots,\alpha_M are computed by the boosting algorithm and weight the contribution of each respective G_m(x). Their effect is to give higher influence to the more accurate classifiers in the sequence.
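
A small AdaBoost.M1 sketch with decision stumps (rpart with maxdepth = 1), following the steps above; the simulated data, M, and the stump learner are illustrative choices, not a reference implementation:

library(rpart)
set.seed(1)
n <- 400
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- factor(ifelse(x1^2 + x2^2 > 2, "1", "-1"))    # labels in {-1, 1}
dat <- data.frame(x1, x2, y)

M <- 20
w <- rep(1/n, n)                                    # observation weights
alpha <- numeric(M); G <- vector("list", M)
for (m in 1:M) {
  G[[m]] <- rpart(y ~ x1 + x2, data = dat, weights = w,
                  control = rpart.control(maxdepth = 1, cp = 0, minsplit = 2))
  pred <- predict(G[[m]], dat, type = "class")
  err  <- sum(w * (pred != y)) / sum(w)             # weighted training error
  alpha[m] <- log((1 - err) / err)                  # classifier weight
  w <- w * exp(alpha[m] * (pred != y))              # up-weight misclassified points
}

# Final classifier: weighted majority vote G(x) = sign(sum_m alpha_m G_m(x))
score <- Reduce(`+`, lapply(1:M, function(m)
  alpha[m] * ifelse(predict(G[[m]], dat, type = "class") == "1", 1, -1)))
mean(sign(score) == ifelse(y == "1", 1, -1))        # training accuracy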

Dropout regularization

DART: Dropout Regularization in Boosting Ensembles

Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function (Wikipedia).

The error function from a simple linear regression looks like

Err(m,b) = \frac{1}{n}\sum_{i=1}^n (y_i - (m x_i + b))^2

We first compute the gradient with respect to each parameter.

\frac{\partial Err}{\partial m} = \frac{2}{n} \sum_{i=1}^n -x_i(y_i - (m x_i + b)),
\frac{\partial Err}{\partial b} = \frac{2}{n} \sum_{i=1}^n -(y_i - (m x_i + b))

The gradient descent algorithm uses an iterative method to update the estimates using a tuning parameter called learning rate.

new_m = m_current - (learningRate * m_gradient)
new_b = b_current - (learningRate * b_gradient)

After each iteration, the gradient moves closer to zero. Coding in R for the simple linear regression:
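
A minimal sketch of such an implementation (simulated data; the learning rate and number of iterations are arbitrary choices):

set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)

m <- 0; b <- 0                     # initial estimates
learningRate <- 0.05
for (i in 1:1000) {
  yhat <- m * x + b
  m_gradient <- -(2 / length(x)) * sum(x * (y - yhat))
  b_gradient <- -(2 / length(x)) * sum(y - yhat)
  m <- m - learningRate * m_gradient
  b <- b - learningRate * b_gradient
}
c(b, m)            # intercept and slope after gradient descent
coef(lm(y ~ x))    # essentially the same estimates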

Classification and Regression Trees (CART)

Construction of the tree classifier

  • Node proportion
 p(1|t) + \dots + p(6|t) = 1, where p(j|t) denotes the node proportion (the proportion of cases of class j in node t). Here we assume there are 6 classes.
  • Impurity of node t
i(t) is a nonnegative function \phi of p(1|t), \dots, p(6|t) such that \phi(1/6,1/6,\dots,1/6) is the maximum and \phi(1,0,\dots,0)=0, \phi(0,1,0,\dots,0)=0, \dots, \phi(0,0,0,0,0,1)=0. That is, the node impurity is largest when all classes are equally mixed together in it, and smallest when the node contains only one class.
  • Impurity measures
The entropy impurity is i(t) = - \sum_{j=1}^6 p(j|t) \log p(j|t); the Gini index of impurity is i(t) = 1 - \sum_{j=1}^6 p(j|t)^2.
  • Goodness of the split s on node t
\Delta i(s, t) = i(t) - p_L i(t_L) - p_R i(t_R), where a proportion p_L of the cases in node t goes into the left node t_L and a proportion p_R goes into the right node t_R (a small numerical illustration is given below).

A tree was grown in the following way: At the root node t_1, a search was made through all candidate splits to find that split s^* which gave the largest decrease in impurity;

\Delta i(s^*, t_1) = \max_{s} \Delta i(s, t_1).
  • Class character of a terminal node was determined by the plurality rule. Specifically, if p(j_0|t)=\max_j p(j|t), then t was designated as a class j_0 terminal node.
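
A tiny numerical illustration of the impurity and goodness-of-split quantities above (hypothetical class proportions; entropy_impurity() and gini_impurity() are helper names introduced here):

entropy_impurity <- function(p) -sum(p[p > 0] * log(p[p > 0]))
gini_impurity    <- function(p) 1 - sum(p^2)

p_parent <- c(.5, .3, .2)   # class proportions p(j|t) in node t
p_left   <- c(.9, .1, 0)    # proportions in the left child t_L
p_right  <- c(.1, .5, .4)   # proportions in the right child t_R
pL <- 0.5; pR <- 0.5        # fractions of cases sent left and right

# goodness of the split: delta_i(s, t) = i(t) - pL*i(t_L) - pR*i(t_R)
entropy_impurity(p_parent) - pL*entropy_impurity(p_left) - pR*entropy_impurity(p_right)
gini_impurity(p_parent)    - pL*gini_impurity(p_left)    - pR*gini_impurity(p_right)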

R packages

Supervised Classification, Logistic and Multinomial

Variable selection and variable importance plot

Variable selection and cross-validation

Variable selection for mode regression, Chen & Zhou, Journal of Applied Statistics, June 2017

Neural network

Support vector machine (SVM)

Quadratic Discriminant Analysis (qda), KNN

Machine Learning. Stock Market Data, Part 3: Quadratic Discriminant Analysis and KNN


Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting

Ridge regression

Since L2 norm is used in the regularization, ridge regression is also called L2 regularization.

ridge regression with glmnet

Hoerl and Kennard (1970a, 1970b) introduced ridge regression, which minimizes RSS subject to the constraint \sum|\beta_j|^2 \le t. Note that although ridge regression shrinks the OLS estimator toward 0 and yields a biased estimator \hat{\beta} = (X^TX + \lambda I)^{-1} X^T y, where \lambda=\lambda(t) is a function of t, its variance is smaller than that of the OLS estimator.

The solution exists if \lambda >0 even if n < p .
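
A small sketch computing the ridge estimator (X^TX + \lambda I)^{-1} X^T y directly and comparing it with OLS (simulated data; \lambda is an arbitrary choice):

set.seed(1)
n <- 50; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))
y <- X %*% c(2, -1, 0, 0, 1) + rnorm(n)
lambda <- 2
beta_ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
beta_ols   <- solve(crossprod(X), crossprod(X, y))
cbind(ols = as.vector(beta_ols), ridge = as.vector(beta_ridge))  # ridge coefficients are shrunk toward 0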

Ridge regression (L2 penalty) only shrinks the coefficients. In contrast, Lasso method (L1 penalty) tries to shrink some coefficient estimators to exactly zeros. This can be seen from comparing the coefficient path plot from both methods.

Geometrically (contour plot of the cost function), the L1 penalty (the sum of absolute values of the coefficients) makes it likely that some coefficients are exactly zero (the solution can hit a corner of the diamond-shaped constraint region in the 2D case). For example, in the 2D case (X-axis=\beta_0, Y-axis=\beta_1), the constraint region of the L1 penalty |\beta_0| + |\beta_1| is a diamond whereas that of the L2 penalty (\beta_0^2 + \beta_1^2) is a circle.

Lasso/glmnet and FAQs

  • The FAQ has a discussion of what happens when two covariates are highly correlated. For example, if gene j and gene k are identical, then the values of \beta_{j} and \beta_{k} that minimize the lasso objective function are not uniquely determined. The Elastic Net has been designed to address this shortcoming.
    • The property that strongly correlated covariates receive similar regression coefficients is referred to as the grouping effect. From the wikipedia page: "one would like to find all the associated covariates, rather than selecting only one from each set of strongly correlated covariates, as lasso often does. In addition, selecting only a single covariate from each group will typically result in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso)".
  • Glmnet Vignette. It tries to minimize RSS(\beta) + \lambda [(1-\alpha)||\beta||_2^2/2 + \alpha ||\beta||_1]. The elastic-net penalty is controlled by \alpha and bridges the gap between lasso (\alpha = 1) and ridge (\alpha = 0). Following is a CV curve (adaptive lasso) using the example from glmnet(). Two vertical lines are indicated: the left one is lambda.min (which gives the minimum mean cross-validated error) and the right one is lambda.1se (the most regularized model such that the error is within one standard error of the minimum). For the tuning parameter \lambda,
    • The larger the \lambda, more coefficients are becoming zeros (think about coefficient path plots) and thus the simpler (more regularized) the model.
    • If \lambda becomes zero, it reduces to the regular regression and if \lambda becomes infinity, the coefficients become zeros.
    • In terms of the bias-variance tradeoff, the larger the \lambda, the higher the bias and the lower the variance of the coefficient estimators.


# Simulated data for the adaptive lasso example below (assumed setup,
# following the example in the cv.glmnet() help page):
library(glmnet)
set.seed(10101)
n <- 1000; p <- 100
nzc <- trunc(p/10)                      # number of non-zero coefficients
x <- matrix(rnorm(n*p), n, p)
beta <- rnorm(nzc)
fx <- x[, seq(nzc)] %*% beta
px <- exp(fx)/(1 + exp(fx))
ly <- rbinom(n, size = 1, prob = px)    # binary response
# Note: parallel=TRUE below requires a registered backend (e.g. doParallel).

## Full lasso
cv.full <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE)
log(cv.full$lambda.min) # -4.546394
log(cv.full$lambda.1se) # -3.61605
sum(coef(cv.full, s=cv.full$lambda.min) != 0) # 44

## Ridge Regression to create the Adaptive Weights Vector
cv.ridge <- cv.glmnet(x, ly, family='binomial', alpha=0, parallel=TRUE)
wt <- 1/abs(matrix(coef(cv.ridge, s=cv.ridge$lambda.min)
                   [, 1][2:(ncol(x)+1)] ))^1 ## Using gamma = 1, exclude intercept
## Adaptive Lasso using the 'penalty.factor' argument
cv.lasso <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE, penalty.factor=wt)
# default type.measure="deviance" for logistic regression
log(cv.lasso$lambda.min) # -2.995375
log(cv.lasso$lambda.1se) # -0.7625655
sum(coef(cv.lasso, s=cv.lasso$lambda.min) != 0) # 34

How to solve lasso/convex optimization

Quadratic linear constraint

The quadratic constraint linear programming problem was used in this paper Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects where the algorithm from Lee 2016 was used.

Imbalanced Classification

Deep Learning

Tensor Flow (tensorflow package)

Biological applications

Machine learning resources

These Machine Learning Courses Will Prepare a Career Path for You


Cross Validation

R packages:

.632 bootstrap

What is the .632 bootstrap?

Create partitions

set.seed(), sample.split(),createDataPartition(), and createFolds() functions.

k-fold cross validation with modelr and broom

Nested resampling

Nested resampling is needed when we want to tune a model using a grid search. The default settings of a model are unlikely to be optimal for every data set. So an inner CV has to be performed with the aim of finding the best parameter set of a learner for each fold.

See a diagram at

In BRB-ArrayTools -> class prediction with multiple methods, the alpha (the significance level threshold used for gene selection, the 2nd option under individual genes) can be viewed as a tuning parameter for the development of a classifier.



k-means clustering

Hierarchical clustering

For the mth cluster, define the Error Sum of Squares ESS_m as the sum of squared deviations (squared Euclidean distances) from the cluster centroid: ESS_m = \sum_{l=1}^{n_m}\sum_{k=1}^p (x_{ml,k} - \bar{x}_{m,k})^2, where \bar{x}_{m,k} = (1/n_m) \sum_{l=1}^{n_m} x_{ml,k} is the mean of the mth cluster for the kth variable, and x_{ml,k} is the score on the kth variable (k=1,\dots,p) for the lth object (l=1,\dots,n_m) in the mth cluster (m=1,\dots,g).

If there are C clusters, define the Total Error Sum of Squares as
ESS = \sum_{m=1}^{C} ESS_m

Consider the union of every possible pair of clusters.

Combine the 2 clusters whose combination results in the smallest increase in ESS.


  1. Ward's method tends to join clusters with a small number of observations, and it is strongly biased toward producing clusters with the same shape and with roughly the same number of observations.
  2. It is also very sensitive to outliers. See Milligan (1980).

Take the Pomeroy data (7129 x 90) as an example:


library(gplots)   # for heatmap.2() and bluered()
lr = read.table("C:/ArrayTools/Sample datasets/Pomeroy/Pomeroy -Project/NORMALIZEDLOGINTENSITY.txt")
lr = as.matrix(lr)
method = "average" # method <- "complete"; method <- "ward.D"; method <- "ward.D2"
hclust1 <- function(x) hclust(x, method = method)
heatmap.2(lr, col = bluered(75), hclustfun = hclust1, distfun = dist,
          density.info = "density", scale = "none",
          key = FALSE, symkey = FALSE, trace = "none",
          main = method)

Hc ave.png Hc com.png Hc ward.png

Density based clustering

Optimal number of clusters

Mixed Effect Model

Model selection criteria


How to judge if a supervised machine learning model is overfitting or not?



Entropy is defined as -\sum_i p_i \log_2(p_i), i.e. the expected value of -\log_2(p), where p is the probability of an outcome. Higher entropy represents higher unpredictability of an event.

Some examples:

  • Fair 2-sided die (coin): Entropy = -.5*log2(.5) - .5*log2(.5) = 1.
  • Fair 6-sided die: Entropy = -6*(1/6)*log2(1/6) = 2.58
  • Weighted 6-sided die: Consider pi=.1 for i=1,..,5 and p6=.5. Entropy = -5*.1*log2(.1) - .5*log2(.5) = 2.16 (less unpredictable than a fair 6-sided die).
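
The dice examples above can be checked with a couple of lines of R (entropy() is a small helper defined here, not a base function):

entropy <- function(p) -sum(p * log2(p))
entropy(c(0.5, 0.5))             # fair 2-sided die: 1
entropy(rep(1/6, 6))             # fair 6-sided die: 2.584963
entropy(c(rep(0.1, 5), 0.5))     # weighted 6-sided die: 2.160964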


When entropy is applied to variable selection, we want to select the variable that gives the largest entropy difference between the entropy without any splitting variable (computed from the response only) and the entropy with that variable (the weighted sum of the entropies within each level of the variable), because that variable is the most discriminative and gives the largest information gain. For example,

  • entropy (without any class)=.94,
  • entropy(var 1) = .69,
  • entropy(var 2)=.91,
  • entropy(var 3)=.725.

We will choose variable 1 since it gives the largest gain (.94 - .69) compared to the other variables (.94 -.91, .94 -.725).

Why is picking the attribute with the most information gain beneficial? It reduces entropy, which increases predictability. A decrease in entropy signifies a decrease in unpredictability, which also means an increase in predictability.

Consider a split of a continuous variable. Where should we cut the continuous variable to create a binary partition with the highest gain? Suppose cut point c1 creates an entropy .9 and another cut point c2 creates an entropy .1. We should choose c2.


In addition to information gain, gini (dʒiːni) index is another metric used in decision tree. See wikipedia page about decision tree learning.


Combining classifiers. Pro: better classification performance. Con: time consuming.



Draw N bootstrap samples and summarize the results (averaging for regression problems, majority vote for classification problems). Bagging decreases variance without changing bias. It does not help much with underfit or high-bias models.

Random forest

Variable importance: if you scramble the values of a variable and the accuracy of your tree does not change much, then the variable is not very important.

Why is it useful to compute variable importance? It makes the model's predictions easier to interpret (it does not improve prediction performance).

Random forests have the advantages of being easy to run in parallel and being suitable for small-n, large-p problems.


Boosting: instead of selecting data points randomly with the bootstrap, it favors the misclassified points.


  • Initialize the weights
  • Repeat
    • resample with respect to weights
    • retrain the model
    • recompute weights

Since boosting requires iterative computation while bagging can be run in parallel, bagging has an advantage over boosting when the data are very large.

Time series

Ensemble learning for time series forecasting in R



Distribution of p values in medical abstracts

nominal p-value and Empirical p-values

  • Nominal p-values are based on asymptotic null distributions
  • Empirical p-values are computed from simulations/permutations

(nominal) alpha level

Conventional methodology for statistical testing is, in advance of undertaking the test, to set a NOMINAL ALPHA CRITERION LEVEL (often 0.05). The outcome is classified as showing STATISTICAL SIGNIFICANCE if the actual ALPHA (probability of the outcome under the null hypothesis) is no greater than this NOMINAL ALPHA CRITERION LEVEL.


Let \hat\beta be an estimator of the parameter \beta in some statistical model. Then a t-statistic for this parameter is any quantity of the form

    t_{\hat{\beta}} = \frac{\hat\beta - \beta_0}{\mathrm{s.e.}(\hat\beta)},

where \beta_0 is a non-random, known constant, and \mathrm{s.e.}(\hat\beta) is the standard error of the estimator \hat\beta.

Two sample test assuming equal variance

The t statistic to test whether the means are different can be calculated as follows:

t = \frac{\bar {X}_1 - \bar{X}_2}{s_{X_1X_2} \cdot \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}


 s_{X_1X_2} = \sqrt{\frac{(n_1-1)s_{X_1}^2+(n_2-1)s_{X_2}^2}{n_1+n_2-2}}.

s_{X_1X_2} is an estimator of the common/pooled standard deviation of the two samples. The square-root of a pooled variance estimator is known as a pooled standard deviation.

The degrees of freedom are n_1 + n_2 - 2.

Two sample test assuming unequal variance

The t statistic (Behrens-Welch test statistic) to test whether the population means are different is calculated as:

t = {\overline{X}_1 - \overline{X}_2 \over s_{\overline{X}_1 - \overline{X}_2}}


s_{\overline{X}_1 - \overline{X}_2} = \sqrt{{s_1^2 \over n_1} + {s_2^2  \over n_2}}.

Here s_1^2 and s_2^2 are the unbiased estimators of the variances of the two samples.

The degrees of freedom are evaluated using Satterthwaite's approximation

df = { ({s_1^2 \over n_1} + {s_2^2 \over n_2})^2  \over {({s_1^2 \over n_1})^2 \over n_1-1} + {({s_2^2 \over n_2})^2 \over n_2-1} }.
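
Both statistics can be checked against t.test() on simulated data (a sketch; the sample sizes and effect size are arbitrary choices):

set.seed(1)
x1 <- rnorm(20, 0, 1); x2 <- rnorm(25, 0.5, 2)
n1 <- length(x1); n2 <- length(x2)

# pooled (equal-variance) t statistic
sp <- sqrt(((n1 - 1)*var(x1) + (n2 - 1)*var(x2)) / (n1 + n2 - 2))
(mean(x1) - mean(x2)) / (sp * sqrt(1/n1 + 1/n2))
t.test(x1, x2, var.equal = TRUE)$statistic            # same value

# Welch t statistic and Satterthwaite degrees of freedom
se <- sqrt(var(x1)/n1 + var(x2)/n2)
(mean(x1) - mean(x2)) / se
se^4 / ((var(x1)/n1)^2/(n1 - 1) + (var(x2)/n2)^2/(n2 - 1))
t.test(x1, x2)$statistic; t.test(x1, x2)$parameter     # same values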

Unpooled vs pooled methods

Paired test

Have you ever asked yourself, "how should I approach the classic pre-post analysis?"


If the population parameters are known, then rather than computing the t-statistic, one can compute the z-score.

Nonparametric test: Wilcoxon rank sum test

Sensitive to differences in location

Nonparametric test: Kolmogorov-Smirnov test

Sensitive to difference in shape and location of the distribution functions of two groups

Empirical Bayes method

See Bioconductor's limma, RnBeads, IMA, minfi packages.


TukeyHSD, diagnostic checking

  • TukeyHSD for the pairwise tests
  • Shapiro-Wilk test for normality
  • Bartlett test and Levene test for the homogeneity of variances across the groups

Repeated measure

Sample Size

Goodness of fit

Chi-square tests

Fitting distribution

Fitting distributions with R

Contingency Tables

Odds ratio and Risk ratio

The odds ratio is the ratio of the odds of an event occurring in one group to the odds of it occurring in another group; a quick numerical check is given below the table.

         drawn   | not drawn | 
white |   A      |   B       | Wh
black |   C      |   D       | Bk
  • Odds Ratio = (A / C) / (B / D) = (AD) / (BC)
  • Risk Ratio = (A / Wh) / (C / Bk)
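
With hypothetical counts plugged into the table above, the two ratios can differ noticeably (a quick check in R):

A <- 20; B <- 80    # white: drawn / not drawn  (Wh = A + B)
C <- 10; D <- 90    # black: drawn / not drawn  (Bk = C + D)
(A * D) / (B * C)               # odds ratio = (A/C)/(B/D) = 2.25
(A / (A + B)) / (C / (C + D))   # risk ratio = (A/Wh)/(C/Bk) = 2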

Hypergeometric, One-tailed Fisher exact test

         drawn   | not drawn | 
white |   x      |           | m
black |  k-x     |           | n
      |   k      |           | m+n

For example, k=100, m=100, m+n=1000,

> 1 - phyper(10, 100, 10^3-100, 100, log.p=F)
[1] 0.4160339
> a <- dhyper(0:100, 100, 10^3-100, 100)
> cumsum(rev(a))
  [1] 1.566158e-140 1.409558e-135 3.136408e-131 3.067025e-127 1.668004e-123 5.739613e-120 1.355765e-116
  [8] 2.325536e-113 3.018276e-110 3.058586e-107 2.480543e-104 1.642534e-101  9.027724e-99  4.175767e-96
 [15]  1.644702e-93  5.572070e-91  1.638079e-88  4.210963e-86  9.530281e-84  1.910424e-81  3.410345e-79
 [22]  5.447786e-77  7.821658e-75  1.013356e-72  1.189000e-70  1.267638e-68  1.231736e-66  1.093852e-64
 [29]  8.900857e-63  6.652193e-61  4.576232e-59  2.903632e-57  1.702481e-55  9.240350e-54  4.650130e-52
 [36]  2.173043e-50  9.442985e-49  3.820823e-47  1.441257e-45  5.074077e-44  1.669028e-42  5.134399e-41
 [43]  1.478542e-39  3.989016e-38  1.009089e-36  2.395206e-35  5.338260e-34  1.117816e-32  2.200410e-31
 [50]  4.074043e-30  7.098105e-29  1.164233e-27  1.798390e-26  2.617103e-25  3.589044e-24  4.639451e-23
 [57]  5.654244e-22  6.497925e-21  7.042397e-20  7.198582e-19  6.940175e-18  6.310859e-17  5.412268e-16
 [64]  4.377256e-15  3.338067e-14  2.399811e-13  1.626091e-12  1.038184e-11  6.243346e-11  3.535115e-10
 [71]  1.883810e-09  9.442711e-09  4.449741e-08  1.970041e-07  8.188671e-07  3.193112e-06  1.167109e-05
 [78]  3.994913e-05  1.279299e-04  3.828641e-04  1.069633e-03  2.786293e-03  6.759071e-03  1.525017e-02
 [85]  3.196401e-02  6.216690e-02  1.120899e-01  1.872547e-01  2.898395e-01  4.160339e-01  5.550192e-01
 [92]  6.909666e-01  8.079129e-01  8.953150e-01  9.511926e-01  9.811343e-01  9.942110e-01  9.986807e-01
 [99]  9.998018e-01  9.999853e-01  1.000000e+00

# Density plot
plot(0:100, dhyper(0:100, 100, 10^3-100, 100), type='h')



  1 - phyper(q=10, m, n, k) 
= 1 - sum_{x=0}^{x=10} dhyper(x, m, n, k)
= 1 - sum(a[1:11]) # R's index starts from 1.

Another example is the data from the functional annotation tool in DAVID.

               | gene list | not gene list | 
pathway        |   3  (q)  |               | 40 (m)
not in pathway |  297      |               | 29960 (n)
               |  300 (k)  |               | 30000

The one-tailed p-value from the hypergeometric test is calculated as 1 - phyper(3-1, 40, 29960, 300) = 0.0074.

Fisher's exact test

Following the above example from the DAVID website, the following R command calculates the Fisher exact test for independence in 2x2 contingency tables.

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2)) #  alternative = "two.sided" by default

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  1.488738 23.966741
sample estimates:
odds ratio

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="greater")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
 1.973   Inf
sample estimates:
odds ratio

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="less")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.9991
alternative hypothesis: true odds ratio is less than 1
95 percent confidence interval:
  0.00000 20.90259
sample estimates:
odds ratio

From the documentation of fisher.test

     fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE,
                 control = list(), or = 1, alternative = "two.sided",
                 conf.int = TRUE, conf.level = 0.95,
                 simulate.p.value = FALSE, B = 2000)
  • For 2 by 2 cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution.
  • For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one.
  • The alternative for a one-sided test is based on the odds ratio, so ‘alternative = "greater"’ is a test of the odds ratio being bigger than ‘or’.
  • Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities.

Chi-square independence test

Exploring the underlying theory of the chi-square test through simulation - part 2


Determines whether an a priori defined set of genes shows statistically significant, concordant differences between two biological states

Two categories of GSEA procedures:

  • Competitive: compare genes in the test set relative to all other genes.
  • Self-contained: test whether the gene set is more differentially expressed than one would expect under the null of no association between the two phenotype conditions (without reference to other genes in the genome). For example the method by Jiang & Gentleman Bioinformatics 2007

Confidence vs Credibility Intervals

Power Analysis

Power analysis for default Bayesian t-tests

Using simulation for power analysis: an example based on a stepped wedge study design

Power analysis and sample size calculation for Agriculture

Power calculation for proportions (shiny app)

Common covariance structures

Assume the covariance matrix \Sigma = (\sigma_{ij})_{p\times p}. Two of these structures are constructed in the short sketch after the list.

  • Diagonal structure: \sigma_{ij} = 0 if i \neq j.
  • Compound symmetry: \sigma_{ij} = \rho if i \neq j.
  • First-order autoregressive AR(1) structure: \sigma_{ij} = \rho^{|i - j|}.
  • Banded matrix: \sigma_{ii}=1, \sigma_{i,i+1}=\sigma_{i+1,i} \neq 0, \sigma_{i,i+2}=\sigma_{i+2,i} \neq 0 and \sigma_{ij}=0 for |i-j| \ge 3.
  • Spatial Power
  • Unstructured Covariance
  • Toeplitz structure
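
For concreteness, the compound symmetry and AR(1) structures can be built directly in R (a sketch; p and \rho are arbitrary):

p <- 5; rho <- 0.6
cs  <- matrix(rho, p, p); diag(cs) <- 1       # compound symmetry (unit variances)
ar1 <- rho^abs(outer(1:p, 1:p, "-"))          # AR(1): sigma_ij = rho^|i-j|
cs
ar1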

Counter/Special Examples

Correlated does not imply independence

Suppose X is a normally-distributed random variable with zero mean. Let Y = X^2. Clearly X and Y are not independent: if you know X, you also know Y. And if you know Y, you know the absolute value of X.

The covariance of X and Y is

  Cov(X,Y) = E(XY) - E(X)E(Y) = E(X^3) - 0*E(Y) = E(X^3)
           = 0, 

because the distribution of X is symmetric around zero. Thus the correlation r(X,Y) = Cov(X,Y)/Sqrt[Var(X)Var(Y)] = 0, and we have a situation where the variables are not independent, yet have (linear) correlation r(X,Y) = 0.

This example shows how a linear correlation coefficient does not encapsulate anything about the quadratic dependence of Y upon X.
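
A quick simulation confirms the point (a sketch):

set.seed(1)
x <- rnorm(1e6)
y <- x^2
cor(x, y)   # approximately 0, even though y is a deterministic function of x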

Spearman vs Pearson correlation

Pearson benchmarks linear relationship, Spearman benchmarks monotonic relationship.

x <- 1:100; y <- exp(x)     # a monotone but nonlinear relationship (assumed example data)
cor(x,y, method='spearman') # 1
cor(x,y, method='pearson')  # .25

Spearman vs Wilcoxon

By this post

  • Wilcoxon used to compare categorical versus non-normal continuous variable
  • Spearman's rho used to compare two continuous (including ordinal) variables that one or both aren't normally distributed

Anscombe quartet

The four datasets have almost the same summary properties: the same mean of X, the same mean of Y, the same variance of X, (almost) the same variance of Y, the same correlation between X and Y, and the same fitted linear regression line.

Anscombe's quartet 3.svg

The real meaning of spurious correlations

library(ggplot2); library(dplyr)   # for ggplot() and the %>% pipe
spurious_data <- data.frame(x = rnorm(500, 10, 1),
                            y = rnorm(500, 10, 1),
                            z = rnorm(500, 30, 3))
cor(spurious_data$x, spurious_data$y)
# [1] -0.05943856
spurious_data %>% ggplot(aes(x, y)) + geom_point(alpha = 0.3) + 
theme_bw() + labs(title = "Plot of y versus x for 500 observations with N(10, 1)")

cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.4517972
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
 theme_bw() + geom_smooth(method = "lm") + 
scale_color_gradientn(colours = c("red", "white", "blue")) + 
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 3)")

spurious_data$z <- rnorm(500, 30, 6)
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.8424597
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) + 
theme_bw() + geom_smooth(method = "lm") + 
scale_color_gradientn(colours = c("red", "white", "blue")) + 
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 6)")

Time series

Structural change

Structural Changes in Global Warming


  • Prognosis is the probability that an event or diagnosis will result in a particular outcome.


Eleven quick tips for finding research data