Thursday, November 10, 2005

Multiple Testing

The goal of this post is to generate reactions on how we should tackle the issue of multiple testing that arises in biosurveillance systems. The notes are based on personal opinions and questions, and on conversations with Dr. Howard Burkom from the Johns Hopkins Applied Physics Lab.

I. RELEVANCE
Multiplicity occurs at multiple levels within a biosurveillance system:

  1. Regional level -- when monitoring multiple regions. This is also true within a region, where we are monitoring multiple locations (e.g., hospitals, offices, stores).
  2. Source level -- within a region we are monitoring multiple sources (OTC, ER, …).
  3. Series level -- within each data source we are monitoring multiple series. Sometimes multiple series are created from a single series by stratifying the data by age group or gender.
  4. Algorithm level -- within a single series, using multiple algorithms (e.g., for detecting changes in different parameters) or even a method such as wavelets that breaks a single series down into multiple series.

The multiplicity actually plays a slightly different role in each case, because we have different numbers and sets of hypotheses.

Howard Burkom et al. (2005) coin the terms “parallel monitoring” and “consensus monitoring” to distinguish between multiple hypotheses tested simultaneously using multiple independent data streams (“parallel”) and multiple data sources monitored to test a single hypothesis (“consensus”). By this distinction we have parallel monitoring at the regional level, but consensus monitoring at the source, series, and algorithm levels.

Are the two types of multiplicity conceptually different? Should the multiple results (e.g., p-values) be combined in the same way?


II. HYPOTHESES
Regional level
-- Each region has a separate null hypothesis. For region i we have
H0: no outbreak in region i
H1: outbreak in region i
Therefore we have multiple sets of hypotheses.
If we consider a single, localized bioterrorist attack, then these sets (and the tests) are independent. If we expect a coordinated attack at multiple locations simultaneously, or an epidemic, then there is positive dependence between the tests.
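
To make the regional-level multiplicity concrete, here is a minimal sketch (the number of regions and the significance level are illustrative, not taken from any deployed system) of why uncorrected daily testing across regions inflates the family-wise false alarm rate, and how a Bonferroni correction restores it:

```python
# Bonferroni correction across regions: illustrative numbers only.
# With m independent regional tests, each run daily at level alpha,
# the chance of at least one false alarm per day is 1 - (1 - alpha)^m.
# Bonferroni tests each region at alpha/m instead.

m = 50          # number of regions monitored (illustrative)
alpha = 0.05    # desired family-wise false alarm rate per day

uncorrected = 1 - (1 - alpha) ** m        # P(>= 1 false alarm), no correction
per_region = alpha / m                    # Bonferroni per-region level
corrected = 1 - (1 - per_region) ** m     # P(>= 1 false alarm), corrected

print(f"P(at least one false alarm), uncorrected: {uncorrected:.3f}")  # ~0.92
print(f"Per-region level after Bonferroni:        {per_region:.4f}")
print(f"P(at least one false alarm), corrected:   {corrected:.3f}")    # ~0.05
```

Note that Bonferroni remains valid under the positive dependence described above, but it then becomes even more conservative.
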
Source level -- even if we limit ourselves to a certain geographic location, say a single zipcode, we still have multiple data sources. In this case we have a single conceptual null hypothesis for all sources:
H0: no outbreak in this region
H1: outbreak in this region

However, we should really treat the outbreak occurrence as a hidden event. We are using the syndromic data as a proxy for measuring the hidden Bernoulli variable, and in fact testing source-specific hypotheses:
H0: no (outbreak-related) increase in OTC sales
H1: (outbreak-related) increase in OTC sales
When we test this, we ignore the (outbreak-related) part: we simply search for increases in OTC sales, ER admissions, etc., and try to eliminate as many external factors as possible (promotions, day-of-week effects, etc.). To see the added level of uncertainty in using proxy information, consider the following diagram:

[Diagram: a probability tree running from the hidden outbreak status, through whether or not it manifests in the syndromic data, to whether or not an alarm is raised.]

When there is no outbreak we might still be getting false alarms that we would not have received had we been measuring a direct manifestation of the outbreak (such as laboratory results). For example, a new pharmacy opening up in a monitored chain would show an increase in medication sales, which might cause an alert. So we should expect a much higher false alarm rate.

On the other hand, in the presence of an outbreak we are likely to miss it if it does not manifest in the data.

So the underlying assumptions when monitoring syndromic data are:
(1) The probability of outbreak-related anomalies manifesting themselves in the data is high (removing the red nodes from the tree, i.e., the outbreak branches that never reach the data).
(2) The probability of an alarm due to non-outbreak reasons is minimal (removing the blue nodes from the tree, i.e., the alarm branches driven by non-outbreak causes).
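
A rough numerical reading of these two assumptions (every probability below is invented for illustration, not an estimate from any real system) shows how the proxy layer inflates both error rates relative to the detector's nominal rates:

```python
# Direct calculation on the outbreak -> data -> alarm tree.
# Every probability here is an illustrative assumption.

p_manifest = 0.8        # P(outbreak shows up in the syndromic data | outbreak)
power      = 0.9        # P(alarm | outbreak signal present in the data)
alpha      = 0.01       # detector's nominal false alarm rate on clean data
p_anomaly  = 0.05       # P(non-outbreak anomaly: promotion, new pharmacy, ...)
p_flag_anomaly = 0.7    # P(alarm | non-outbreak anomaly in the data)

# No outbreak: the detector can fire on a non-outbreak anomaly or on clean data.
p_false_alarm = p_anomaly * p_flag_anomaly + (1 - p_anomaly) * alpha
# Outbreak: either it never reaches the data, or it does and the detector misses.
p_miss = (1 - p_manifest) + p_manifest * (1 - power)

print(f"P(alarm | no outbreak) = {p_false_alarm:.3f}  (nominal: {alpha})")
print(f"P(miss  | outbreak)    = {p_miss:.3f}  (nominal: {1 - power:.2f})")
```

With these made-up numbers the effective false alarm rate is more than four times the nominal one, and the miss rate nearly triples; that gap is exactly what assumptions (1) and (2) are meant to close.
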

Based on these two assumptions, most algorithms are designed to test:
H0: no change in a parameter of the syndromic data
H1: a change in a parameter of the syndromic data
With respect to assumption (2), it has been noted that there are many cases where non-outbreak reasons lead to alarms. Tightly controlling for those factors is therefore important. Alternatively, the false alarm rate (and correct detection rate) should be adjusted to account for these additional factors. The same issues arise in series- and algorithm-level monitoring.

Series level -- multiple series within a data source are usually collected for monitoring different syndromes; for instance, cough medication/cc, fever medication/cc, etc. This is also how the CDC thinks about the multiple series, grouping ICD-9 codes into 11 symptom-based categories. If we treat each syndrome separately, then we have 11 tests going on:
H0: no increase in syndrome j
H1: increase in syndrome j

Ideally, we would look at the specific combination of symptoms (i.e., a syndrome) that increases, to better understand which disease is spreading. Moreover, we believe that an outbreak will lead to increases in multiple symptoms, so these hypotheses are really related. Again, the conditioning comes in: whether there is an outbreak, plus the additional uncertainty about if and how the syndromic data will show it.
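
If the 11 syndrome-level p-values are treated as one family each day, a false discovery rate correction can be applied directly. A minimal sketch of the Benjamini-Hochberg step-up procedure, with fabricated p-values:

```python
# Benjamini-Hochberg step-up procedure over 11 syndrome p-values.
# The p-values are fabricated for illustration.

def benjamini_hochberg(pvals, q=0.05):
    """Return the indices of the hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:  # step-up condition: p_(rank) <= rank*q/m
            k = rank                  # remember the largest passing rank
    return sorted(order[:k])          # reject the k smallest p-values

pvals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.12,
         0.24, 0.33, 0.45, 0.61, 0.88]  # one per syndrome group
print("Syndrome groups flagged:", benjamini_hochberg(pvals, q=0.05))
```

The plain BH procedure is valid under independence or positive dependence of the tests; since the syndrome counts are related, as argued above, the positive-dependence case is the relevant one here.
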

Algorithm level -- multiple algorithms might be run on the same series because each algorithm looks for a different type of signal. This gives a single H0 but multiple H1's:
H0: series mean is unchanged
H1: change of type k in the series mean
Or, if we have algorithms that monitor different parameters (a CUSUM for the mean and an F-statistic for the variance), then we also have a multiplicity of H0's. Finally, algorithms such as wavelet-based multiscale SPC break the series down into multiple resolutions. If we then test at each resolution, we have a multiplicity. Whether the resolutions are correlated or not depends on the algorithm (e.g., whether it uses downsampling).
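
For concreteness, here is a bare-bones one-sided CUSUM for an increase in the series mean; the reference value k and the threshold h are illustrative choices, normally tuned to the in-control behavior and a target average run length:

```python
# One-sided (upper) CUSUM for detecting an increase in the series mean.
# k (allowance) and h (decision threshold) are illustrative settings.

def cusum_upper(series, mean0, sd0, k=0.5, h=4.0):
    """Yield (day, statistic, alarm) for each observation."""
    s = 0.0
    for t, x in enumerate(series):
        z = (x - mean0) / sd0     # standardize against the baseline
        s = max(0.0, s + z - k)   # accumulate only upward deviations
        yield t, s, s > h         # alarm once the statistic crosses h

baseline_mean, baseline_sd = 100.0, 10.0
counts = [103, 96, 108, 99, 112, 118, 121, 125, 130, 128]  # fabricated counts
for t, s, alarm in cusum_upper(counts, baseline_mean, baseline_sd):
    if alarm:
        print(f"day {t}: CUSUM = {s:.2f} -> alarm")
```

An analogous lower-sided statistic, or a separate variance chart, would add exactly the multiplicity of null hypotheses described above.
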

III. HANDLING MULTIPLE TESTING STATISTICALLY
There are several methods for handling multiple testing, ranging from Bonferroni-type corrections to Benjamini and Hochberg’s False Discovery Rate (and its variants). There are also Bayesian methods aimed at tackling this problem. Each of these methods has its limitations: Bonferroni is considered over-conservative; FDR corrections depend on the number of hypotheses and are problematic with too few hypotheses; and Bayesian methods are sensitive to the choice of prior, and it is unclear how to choose one. Burkom et al. (2005) consider these methods for correcting for "parallel monitoring" (multiple hypotheses with independent data streams). For "consensus monitoring" they consider a different set of methods for combining the multiple p-values, drawn from the world of clinical trials. These include Fisher’s statistic and Edgington’s method, which sums the p-values and can be approximated, for a large number of tests, by a normal distribution. Burkom et al. (2005) discuss the advantages and disadvantages of these two methods in the context of the ESSENCE surveillance system.
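
As a concrete reading of the two consensus methods (a sketch of the textbook forms, not of the ESSENCE implementation): Fisher’s statistic is -2 times the sum of log p-values, referred to a chi-squared distribution with 2n degrees of freedom, while Edgington’s method sums the p-values and, for large n, exploits the fact that a sum of n independent Uniform(0,1) values has mean n/2 and variance n/12:

```python
# Two consensus methods for combining p-values from multiple sources.
# A sketch of the textbook forms, not the ESSENCE implementation.
import math
from scipy import stats

def fisher_combined(pvals):
    """Fisher: -2 * sum(log p) ~ chi-squared with 2n df under H0."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    return stats.chi2.sf(stat, df=2 * len(pvals))

def edgington_normal(pvals):
    """Edgington, large-n normal approximation: under H0 the sum of n
    Uniform(0,1) p-values has mean n/2 and variance n/12."""
    n = len(pvals)
    z = (sum(pvals) - n / 2.0) / math.sqrt(n / 12.0)
    return stats.norm.cdf(z)  # a small sum of p-values -> small consensus p

pvals = [0.04, 0.20, 0.11, 0.35]  # fabricated p-values from four sources
print("Fisher consensus p:   ", round(fisher_combined(pvals), 4))
print("Edgington consensus p:", round(edgington_normal(pvals), 4))
```

Fisher’s statistic is driven by the smallest p-values (one very small p-value can dominate), whereas Edgington’s sum treats all sources more evenly; this difference underlies the trade-offs Burkom et al. (2005) discuss.
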

But should we really use different methods for accounting for the multiplicity? What is the link between the actual corrections and the conceptual differences?

IV. WHAT CAN BE DONE?
  1. Rate the quality of data sources: signaling by more reliable sources should be more heavily weighted (one way to do this is sketched after this list).
  2. Evaluate the risk level of the different regions: alarms in higher-risk regions should be taken more seriously (like Vicky Bier’s arguments about investment in higher-risk cases)
  3. “The more the merrier” is probably not a good strategy when it comes to the number of data sources. It is better to invest in a few reliable data sources than in many less-reliable ones. Along the same lines, the series chosen for monitoring should be carefully screened according to their real contribution and their reliability. With respect to regions, it is better to monitor higher-risk regions (in the context of bioterrorist attacks or epidemics).
  4. Solutions should depend on who the monitoring body is: national surveillance systems (e.g., CDC’s BioSense) face the regional issue more than local systems do.
  5. The choice of symptom grouping and syndrome definitions, which is currently based on medical considerations (http://www.bt.cdc.gov/surveillance/syndromedef/index.asp), would benefit from incorporating statistical considerations.
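
One way to operationalize item 1 is the standard weighted Stouffer (z-score) combination; the weights and p-values below are invented reliability ratings, not derived from any real system:

```python
# Weighted Stouffer combination: more reliable sources get larger weights.
# Weights and p-values are fabricated for illustration.
import math
from scipy import stats

def weighted_stouffer(pvals, weights):
    """Combine one-sided p-values: Z = sum(w_i * z_i) / sqrt(sum w_i^2)."""
    zs = [stats.norm.isf(p) for p in pvals]  # z-score for each source
    num = sum(w * z for w, z in zip(weights, zs))
    den = math.sqrt(sum(w * w for w in weights))
    return stats.norm.sf(num / den)          # combined p-value

pvals   = [0.03, 0.40, 0.10]  # e.g., ER visits, OTC sales, absenteeism
weights = [1.0, 0.5, 0.7]     # invented reliability ratings
print("Combined p-value:", round(weighted_stouffer(pvals, weights), 4))
```

Under this scheme an unreliable source can still contribute, but it takes a much stronger signal from it to move the consensus.
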

V. REFERENCES

Burkom, H. S., Murphy, S., Coberly, J., and Hurt-Mullen, K. (2005). “Public Health Monitoring Tools for Multiple Data Streams.” MMWR, August 26, 2005, 54(Suppl), 55-62.


Marshall, C., Best, N., Bottle, A., and Aylin, P. (2004). “Statistical Issues in the Prospective Monitoring of Health Outcomes Across Multiple Sources.” JRSS A, 167(3), 541-559.