## Some more technical points on using meta-analysis

<strong>Pooling studies that are not identical</strong>

Ideally, in a meta-analysis all the studies would be done identically, with the same outcome and independent variables in the same populations. This ideal is never met in practice, so there is always some level of heterogeneity. The accepted way of handling this is to do a <em>random effects meta-analysis</em>. [Wikipedia](https://en.wikipedia.org/wiki/Study_heterogeneity) has a nice description of how this works: “When there is heterogeneity that cannot readily be explained, one analytical approach is to incorporate it into a random effects model. A random effects meta-analysis model involves an assumption that the effects being estimated in the different studies are not identical, but follow some distribution. The model represents the lack of knowledge about why real, or apparent, treatment effects differ by treating the differences as if they were random.”

One important point that the Wikipedia entry does not mention is that a random effects meta-analysis produces wider confidence intervals than a so-called fixed effects meta-analysis (which you can use when there is study homogeneity). The reason is that a random effects meta-analysis accounts for between-study differences, which adds another component of uncertainty to the estimates.

What this means in practical terms is that it is harder to reach statistical significance.

We used a random effects meta-analysis.
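The mechanics described above can be sketched in a few lines of code. This is a minimal illustration of the standard DerSimonian–Laird random effects procedure, not our paper's actual analysis; the effect sizes and variances below are hypothetical log odds ratios invented for the example.

```python
import math

def pool(effects, variances, random_effects=True):
    """Inverse-variance pooling; returns (estimate, 95% CI low, 95% CI high)."""
    w = [1.0 / v for v in variances]  # fixed effects (inverse-variance) weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    if random_effects:
        # Cochran's Q, then the DerSimonian-Laird between-study variance tau^2
        q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)
        # Adding tau^2 to each study's variance shrinks the weights,
        # which is exactly what widens the confidence interval
        w = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, est - 1.96 * se, est + 1.96 * se

# Hypothetical log odds ratios and within-study variances for four studies
effects = [-0.5, -0.2, -0.9, -0.1]
variances = [0.04, 0.06, 0.05, 0.08]

fixed_est = pool(effects, variances, random_effects=False)
random_est = pool(effects, variances, random_effects=True)
```

Running this shows the practical point made above: whenever there is between-study heterogeneity, the random effects interval comes out wider than the fixed effects one, so statistical significance is harder to reach.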

<strong>All the studies included in our meta-analysis shared two important elements</strong>

They all used smoking cessation (more precisely, no longer smoking) as the outcome, and they all compared people using e-cigarettes with people not using e-cigarettes.

There were many other details that varied – whether the study was longitudinal or cross-sectional, a clinical trial or an observational study, whether the subjects were actively trying to quit smoking, where the people lived, when e-cigarette use was assessed, what covariates were controlled for, etc. We addressed this variation in two ways.

First, as described above, we used a random effects meta-analysis in the paper.

Second, we did a sensitivity analysis to test whether any of these (and other) factors affected the results, and quite convincingly found that none of them made any significant difference in the overall conclusion that e-cigarettes were associated with less quitting. Indeed, in the original submission I think we looked at seven factors, and we added two more (totaling nine) in response to the reviewers’ comments. This sensitivity analysis is one of the most important contributions the paper makes because it shows that the theoretical problems that the critics raise are not, in fact, making much difference in the results.
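The kind of sensitivity check described above can be sketched as pooling the studies separately within each level of a factor and comparing the subgroup estimates. The studies, effect sizes, and the "design" factor below are hypothetical, invented only to show the shape of the analysis.

```python
import math

# (hypothetical log odds ratio, within-study variance, study design)
studies = [
    (-0.5, 0.04, "rct"),
    (-0.2, 0.06, "observational"),
    (-0.9, 0.05, "observational"),
    (-0.1, 0.08, "rct"),
]

def pooled(subset):
    """Inverse-variance pool of one subgroup; returns (estimate, CI low, CI high)."""
    w = [1.0 / v for _, v, _ in subset]
    est = sum(wi * e for wi, (e, _, _) in zip(w, subset)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, est - 1.96 * se, est + 1.96 * se

# Stratify the studies by the factor being tested
by_design = {}
for s in studies:
    by_design.setdefault(s[2], []).append(s)

results = {design: pooled(subset) for design, subset in by_design.items()}
# If every subgroup's pooled estimate points the same way (and the intervals
# overlap), that factor is not driving the overall conclusion.
```

In practice such checks are often done as meta-regression or subgroup meta-analysis; this stripped-down version just makes the logic concrete.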

<strong>Not all the studies had the purpose of studying e-cigarettes for cessation</strong>

As noted above, all the studies measured the same outcome: no longer smoking. The important point made in the paper is that we study e-cigarettes as they are used in the real world. This is a broader question than whether e-cigarettes used as part of a smoking cessation program help people quit; it asks what effect they are having on cessation overall. It is possible that they could be helping some people quit and seriously harming others. The fact that we found less quitting overall, even in studies of people who were trying to quit, shows that, at the very least, the latter group dominates the former. Moreover, the sensitivity analysis showed that whether the people were trying to quit did not affect the overall conclusion.

<strong>We included observational studies, not just RCTs</strong>

It is important to keep in mind that e-cigarettes are not prescription drugs used under close medical supervision, but mass consumer products (a point we make in the paper). Understanding the impacts of e-cigarettes requires studying them in the real world, not just in the highly artificial environment of a randomized clinical trial. In addition, much research on health effects, and even on the effects of therapies, is done with observational studies.

For these reasons, I actually think that the observational studies are more relevant than the RCTs. But we included both in our paper. Again, the sensitivity analysis showed that the study type didn’t significantly affect the results.

<strong>Other reasons that we can have confidence in our results</strong>

Despite the differences between the individual studies, the results were broadly consistent (Figures 2 and 3). From this perspective the heterogeneity strengthens the overall conclusion.
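Between-study heterogeneity of the kind discussed throughout this section is conventionally quantified with Cochran's Q and the I² statistic (the share of total variability attributable to between-study differences rather than chance). A minimal sketch, using the same kind of hypothetical log odds ratios as before:

```python
import math

# Hypothetical log odds ratios and within-study variances
effects = [-0.5, -0.2, -0.9, -0.1]
variances = [0.04, 0.06, 0.05, 0.08]

w = [1.0 / v for v in variances]                       # inverse-variance weights
pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q)  # I^2 as a fraction; 0 means no excess heterogeneity
```

Note that I² measures the spread of effect sizes, not their direction: studies can be heterogeneous in magnitude while still consistently pointing the same way, which is the situation described above.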

Another very well done (probably the best done) large longitudinal study (over 5000 men followed for a year with excellent control for confounders) was published after we completed our paper. It showed depressed quitting among e-cigarette users, consistent with our meta-analysis. The paper is [here](http://www.smw.ch/content/smw-2016-14271/).

<strong>Finally, there is no such thing as a perfect study</strong>

In our paper (near the end of the Discussion) we describe what a perfect study of e-cigarettes and quitting would look like and make the point that doing such a study is impossible. The fact that no study can do everything every critic can think of makes it possible to criticize any study. This nit-picking is a well-established industry strategy for discounting studies the industry doesn’t like.

What we do, and indeed what a properly done meta-analysis is for, is to look beyond the individual studies for overall patterns in the data.