How do I calculate the standard error in a logistic meta-analysis?


I have randomly broken down a very large dataset into 20 equal-sized blocks.
I have fit a logistic mixed model to each block in R with lme4.
Say my model is simply:

glmer(y ~ X + Y + (1|city/ID), family = binomial)

logit(P(y = 1)) = a + b·X + c·Y + random intercepts

This gives me, for each block, the intercept and each coefficient, together with their standard errors and p-values.
I have collected the output in a data.frame (or data.table) with one row per block and one column per coefficient, standard error, and p-value, but I can change that layout.
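For concreteness, here is a toy version of that table (made-up numbers), showing just the columns for the coefficient b:

```r
# Toy example of the per-block results table (invented numbers):
results <- data.frame(
  block = 1:3,
  b     = c(0.52, 0.61, 0.48),   # coefficient of X in each block
  se_b  = c(0.10, 0.12, 0.11),   # its standard error
  p_b   = c(2e-7, 4e-7, 1e-6)    # its p-value
)
```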

Now I would like to combine all the results into a "global" or "averaged" model.
For example, for the coefficient b:

  • I calculate the global b as a weighted average of the per-block estimates bᵢ.
  • I calculate the standard error (and from it a p-value) for the global b.
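In case it helps, this is the kind of weighted average I have in mind (a standard fixed-effect, inverse-variance combination, if I understand it correctly), with made-up per-block numbers:

```r
# Fixed-effect (inverse-variance) pooling of per-block estimates.
# `b` and `se` are the per-block coefficients and standard errors
# (invented numbers for illustration):
b  <- c(0.52, 0.61, 0.48)
se <- c(0.10, 0.12, 0.11)

w       <- 1 / se^2                            # inverse-variance weights
b_glob  <- sum(w * b) / sum(w)                 # global (weighted) estimate
se_glob <- sqrt(1 / sum(w))                    # its standard error
p_glob  <- 2 * pnorm(-abs(b_glob / se_glob))   # two-sided p-value
```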

How can I combine the per-block standard errors into a global standard error, and then compute a p-value for each global coefficient? A formula or an R package function would both be great.
I think it's a kind of meta-analysis.


You're looking for Rubin's rules (Rubin, D. B. (1987). Multiple Imputation for Nonresponse in Surveys. New York: John Wiley and Sons). The same pooling problem arises in multiple imputation, so you may be able to use the mice package to combine your estimates, although that would probably mean converting your estimates into a mice-compatible object, and I have no idea how involved that is. On the other hand, Rubin's rules are not complicated, so it is also straightforward to implement them yourself.
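A minimal base-R sketch of Rubin's rules, assuming vectors est (the per-block estimates bᵢ) and se (their standard errors); the degrees of freedom are the original Rubin (1987) formula:

```r
# Rubin's rules: pool m per-block estimates and standard errors.
pool_rubin <- function(est, se) {
  m    <- length(est)
  qbar <- mean(est)              # pooled point estimate
  W    <- mean(se^2)             # within-block variance
  B    <- var(est)               # between-block variance
  Tvar <- W + (1 + 1/m) * B      # total variance of the pooled estimate
  se_p <- sqrt(Tvar)
  # Rubin's (1987) degrees of freedom for the t reference distribution
  df   <- (m - 1) * (1 + W / ((1 + 1/m) * B))^2
  p    <- 2 * pt(-abs(qbar / se_p), df)
  list(estimate = qbar, se = se_p, df = df, p.value = p)
}

# Usage with invented numbers (3 blocks instead of your 20):
pool_rubin(est = c(0.52, 0.61, 0.48), se = c(0.10, 0.12, 0.11))
```

Note that Rubin's rules weight the blocks equally (your blocks are equal-sized, so that seems reasonable), and the between-block variance B inflates the pooled standard error when the estimates disagree across blocks.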


If you use brms and go Bayesian, you can work with the posterior samples directly and obtain a posterior distribution for whatever combination of the existing parameters you like.
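For instance, a sketch along these lines (assuming a data frame dat with the question's variables; fitting requires Stan to compile the model, so this is not a quick drop-in):

```r
library(brms)

# Bayesian logistic mixed model; bernoulli() is brms' family
# for 0/1 outcomes.
fit <- brm(y ~ X + Y + (1 | city/ID),
           data = dat, family = bernoulli())

# One row per posterior draw; columns b_X and b_Y hold the
# posterior samples of the two fixed-effect coefficients.
draws <- as_draws_df(fit)
combo <- draws$b_X + draws$b_Y          # posterior of b + c
quantile(combo, c(0.025, 0.5, 0.975))   # posterior summary of the combination
```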