 # Calculating difference in medians (with CIs)

Hi,

I wish to compare variables for patients with, and without, a certain mutation. The data is non-parametric.

I have assessed whether there is a significant difference for each variable, e.g. serum potassium, by running Mann-Whitney U tests (called the two-sample Wilcoxon test in R) with the wilcox.test() function.

I would like to also report the difference of the medians with confidence intervals (alongside the p-value for Mann-Whitney-U).

Note that the wilcox.test() function reports a confidence interval for the Hodges-Lehmann estimate (the median of all pairwise differences between the two groups), which is not the same thing as the difference of the medians.
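For concreteness, here is a small sketch with simulated data (stand-ins for the real serum values, not anything from my dataset) showing the distinction:

```r
# Simulated stand-ins for the two patient groups (assumption: skewed data)
set.seed(1)
x <- rexp(30, rate = 1.0)   # e.g. mutation group
y <- rexp(40, rate = 0.7)   # e.g. no-mutation group

wt <- wilcox.test(x, y, conf.int = TRUE)
wt$conf.int               # CI for the Hodges-Lehmann estimate
unname(wt$estimate)       # median of all pairwise differences x_i - y_j
median(outer(x, y, "-"))  # the same quantity, computed directly
median(x) - median(y)     # difference of medians: a different number
```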

I have tried - and failed - using the twomeans() function with the dplyr package, and also the median.diff() function in the pairwiseCI package.

Any help gratefully received!

I must preface with the disclaimer that, in essence, I don't know what I'm talking about in the world of nonparametric statistics, among many other universes. So let me be the slave boy in Plato's Meno.

You have a series of paired medians and their differences, and you want to be able to bracket the confidence intervals around the differences, if I've understood the question.

Do you have to have a distribution model in mind to calculate the CIs? Let D = obs_1 - obs_2. Is D normal (good), or one of the other standard distributions? If it's an exotic distribution, which one? If indeterminate, how will it ever be possible to put a CI around it?

All of which is to say, I have only a question, not an answer.

I'm glad to be a tabula rasa interlocutor on the problem, if that helps.

Since OP specifically mentions `"The data is non-parametric."`, I think nothing is known about the underlying distribution, except continuity. But I agree with Richard that this is a valid question.

Usually for paired data, the Wilcoxon test converts the two-sample data into a one-sample set of differences and checks whether the median of that distribution is 0 or not. If one finds the medians of the two samples separately, then even if we assume that large-sample approximations to the distributions of the sample medians are valid, how does one account for the dependence between the two samples? If the samples are independent, that's OK, but that's very unlikely considering the context of the problem.

I don't think `dplyr` has a function `twomeans()`. And please keep in mind that R is case-sensitive: the function in `pairwiseCI` is `Median.diff`, not `median.diff`.
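If the two groups really are independent, one common route (a sketch with simulated data and the `boot` package, as an assumption on my part rather than something OP has tried) is a percentile bootstrap of `median(x) - median(y)`:

```r
library(boot)

# Simulated stand-ins for the two independent groups (an assumption)
set.seed(42)
dat <- data.frame(
  value = c(rexp(30, 1.0), rexp(40, 0.7)),
  group = rep(c("mut", "wt"), times = c(30, 40))
)

# Statistic: difference of group medians, recomputed on each resample
med_diff <- function(d, idx) {
  d <- d[idx, ]
  median(d$value[d$group == "mut"]) - median(d$value[d$group == "wt"])
}

# strata = dat$group resamples within each group, preserving group sizes
b <- boot(dat, med_diff, R = 2000, strata = dat$group)
boot.ci(b, type = "perc")  # percentile CI for the difference of medians
```

This makes no distributional assumption beyond the resampling approximation itself, which seems to be the situation OP is in.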

Yes, I do know that the underlying data are non-parametric. I don't understand why that makes medians derived from the data non-parametric as well. Consider the classic `flights` dataset in the `nycflights13` package. Let `arr_delay` stand in for the difference in medians and arbitrarily model it on `dep_time + distance`.

```r
library(nycflights13)
library(tidyverse)
#> Registered S3 methods overwritten by 'ggplot2':
#>   method         from
#>   [.quosures     rlang
#>   c.quosures     rlang
#>   print.quosures rlang
data(flights)
delta <- flights %>% select(arr_delay, dep_time, distance)
delta.fit <- lm(arr_delay ~ dep_time + distance, data = delta)
summary(delta.fit)
#>
#> Call:
#> lm(formula = arr_delay ~ dep_time + distance, data = delta)
#>
#> Residuals:
#>     Min      1Q  Median      3Q     Max
#>  -95.25  -23.31   -8.88    9.21 1294.06
#>
#> Coefficients:
#>               Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -1.792e+01  2.486e-01  -72.07   <2e-16 ***
#> dep_time     2.116e-02  1.551e-04  136.40   <2e-16 ***
#> distance    -3.554e-03  1.029e-04  -34.53   <2e-16 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 43.33 on 327343 degrees of freedom
#>   (9430 observations deleted due to missingness)
#> Multiple R-squared:  0.0574, Adjusted R-squared:  0.05739
#> F-statistic:  9967 on 2 and 327343 DF,  p-value: < 2.2e-16
confint(delta.fit)
#>                     2.5 %       97.5 %
#> (Intercept) -18.402990518 -17.42857004
#> dep_time      0.020853423   0.02146147
#> distance     -0.003755629  -0.00335215
```

Created on 2019-06-13 by the reprex package (v0.3.0)

If you `plot(delta.fit)`, you'll see a Q-Q plot that's half normal and then climbs to its cruising altitude like a jet.

My point is that to get confidence intervals on the differences, you have to have some idea of the sampling distribution that your point estimate departs from.

For what that test statistic should be, I have to invoke Wittgenstein:

> Whereof one cannot speak, thereof one must be silent.

To clarify:
For those variables in my dataset that are normally distributed, I have been able to calculate the difference in means, with corresponding p-values (t-test) and confidence intervals.
I had (naively) hoped to do the same for my non-normal variables, mostly to fill the empty gaps in the table I am compiling. I now see it is more complicated than that! I have no idea what distribution these variables follow, only that the Q-Q plots are a poor fit (so not normal).
Maybe it's a non-starter.
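For reference, this is the parametric analogue that worked for the normal variables, sketched with made-up numbers (not the real lab data): `t.test()` reports the difference in means and its CI directly.

```r
# Sketch with simulated values standing in for a normally distributed variable
set.seed(7)
a <- rnorm(30, mean = 4.0, sd = 0.4)  # e.g. mutation group
b <- rnorm(40, mean = 4.3, sd = 0.4)  # e.g. no-mutation group

tt <- t.test(a, b)                    # Welch two-sample t-test by default
unname(tt$estimate[1] - tt$estimate[2])  # point estimate of mean(a) - mean(b)
tt$conf.int                           # CI for mean(a) - mean(b)
```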

