Sunday, December 18, 2005

Outbreak Simulation

I. Relevance

In the absence of syndromic data that include a bioterrorism-related disease outbreak, it is hard to evaluate the detection ability of different algorithms. This is on top of the complication of natural disease outbreaks in the data, which are not easily labeled (when exactly was the last flu season in a certain geographical location?).

One approach has been to simulate signatures of such attacks and inject them into real, but attack-less, data. A second approach has been to simulate the attack-less data as well. Yet a third approach has been to model the consequences of a bio-agent release using meteorological and atmospheric models and to use those to simulate an attack.

In this posting we concentrate on temporal data streams and algorithms. However, similar issues arise in spatial and spatio-temporal data and approaches.

II. Examples
In Goldenberg et al. (2002) we injected linearly increasing outbreaks into cough medication sales over a 3-day period, trying different slopes and different magnitudes.

Stoto et al. (2004) “seeded” real ER data with a “fast” outbreak, constructed as a 3-day linear increase in cases (adding 3, 6, and 9 cases on the first, second, and third days, respectively), or a “slow” outbreak, constructed as a 9-day step function (adding 1, 1, 1, 2, 2, 3, 3, 3 cases on the first through ninth days).
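
As a rough illustration of these injection schemes, here is a minimal Python sketch; the Poisson baseline, start day, and helper name `inject_outbreak` are hypothetical, and the signature values simply echo the shapes described above:

```python
import numpy as np

def inject_outbreak(baseline, signature, start):
    """Add a simulated outbreak signature to an attack-free series,
    beginning at day index `start`."""
    series = np.asarray(baseline, dtype=float).copy()
    end = min(start + len(signature), len(series))
    series[start:end] += signature[: end - start]
    return series

# Hypothetical attack-free daily counts (in practice: real OTC sales or ER visits)
rng = np.random.default_rng(0)
baseline = rng.poisson(lam=30, size=120)

# Goldenberg et al. (2002)-style signature: a 3-day linear increase
# (the study varied the slope and magnitude; a slope of 3 is just one example)
linear_sig = np.array([3, 6, 9])

# Stoto et al. (2004)-style "slow" signature: a multi-day step function
step_sig = np.array([1, 1, 1, 2, 2, 3, 3, 3])

fast = inject_outbreak(baseline, linear_sig, start=60)
slow = inject_outbreak(baseline, step_sig, start=60)
```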

Burkom et al. (2005) simulated background (=attack-less) counts from a Poisson distribution. They then injected counts drawn randomly from a lognormal distribution, based on the lognormal distribution of incubation periods of infectious diseases.
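
A sketch of that construction, under assumed parameter values: only the 3.5-day median incubation (quoted in the next section) comes from Burkom et al.; the background rate, case count, and lognormal spread are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Attack-less background: independent Poisson daily counts (rate is a placeholder)
background = rng.poisson(lam=25, size=180)

# Incubation-period distribution: lognormal with a 3.5-day median
# (for a lognormal, median = exp(mu), so mu = log(3.5); sigma is assumed)
mu, sigma = np.log(3.5), 0.4
n_cases = 100                               # assumed total number of outbreak cases
onset_days = rng.lognormal(mean=mu, sigma=sigma, size=n_cases)

# Turn each case's symptom-onset time into a daily count relative to the
# (hypothetical) release day, and add it to the background series
release_day = 90
signature = np.bincount(np.floor(onset_days).astype(int))
series = background.astype(float).copy()
series[release_day:release_day + len(signature)] += signature[: len(series) - release_day]
```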

III. Determining the outbreak structure
The main issue is that we do not really know what the signature of a disease outbreak following a bioterrorist attack would look like in medication sales, ER admissions, etc. In particular:
- Different types of outbreaks can lead to different signatures
- Different data streams might have different “reactions” to outbreaks
What we do have is some knowledge of disease progression. The lognormal curve derives from such information. But what does it measure when it comes to syndromic data streams? According to Burkom et al. (2005):
“The incubation period distribution was used to estimate the idealized curve for the expected number of new symptomatic cases on each outbreak day. The lognormal parameters were chosen to give a median incubation period of 3.5 days, consistent with the symptomatology of known weaponized diseases and a temporal case dispersion consistent with previously observed outbreaks”
The question is what can be inferred from disease progression about the manifestation of the outbreak in pre-diagnosis data. There will clearly be large effects of media coverage, word-of-mouth, and mass psychology. Can these be integrated to some degree?

Another approach has been to model behavior at the individual level. Wong et al. (2005) consider the fact that “the majority of the background knowledge of the characteristics of respiratory anthrax disease is at an individual rather than a population level.” They therefore build a model based on “person-level” activity for detecting infectious but noncontagious diseases such as Anthrax.

IV. Implications
Given that outbreak simulation is used to evaluate the performance of detection algorithms, the main issue with simulating a pre-defined outbreak shape is that we can then design the monitoring algorithm that is most efficient at detecting that particular simulated shape!

For instance, it can be shown that a Shewhart chart is most efficient at detecting a (large) single spike, a Cusum chart at detecting a step increase, and an EWMA chart at detecting an exponential increase (Box & Luceño, 1997).
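
To see why, it helps to write out the statistics themselves; a minimal sketch with illustrative, untuned parameters:

```python
import numpy as np

def shewhart(x, mu, sigma, k=3.0):
    """Shewhart chart: flags days on which a single observation exceeds
    mu + k*sigma; best suited to a large one-day spike."""
    return np.asarray(x, dtype=float) > mu + k * sigma

def cusum(x, mu, sigma, slack=0.5, h=4.0):
    """One-sided CUSUM: accumulates small standardized exceedances, so it
    reacts quickly to a sustained step increase in the mean."""
    s, alarms = 0.0, np.zeros(len(x), dtype=bool)
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - mu) / sigma - slack)
        alarms[t] = s > h
    return alarms

def ewma(x, mu, sigma, lam=0.3, L=3.0):
    """EWMA chart: exponentially weighted average of recent observations,
    sensitive to gradual (e.g., exponential) increases."""
    z, alarms = mu, np.zeros(len(x), dtype=bool)
    for t, xt in enumerate(x):
        z = lam * xt + (1 - lam) * z
        width = sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        alarms[t] = z > mu + L * width
    return alarms
```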

In the recent Bio-ALIRT competition, a group of medical and epidemiological experts examined the datasets and determined when outbreaks occurred by eyeballing the data and running a Cusum chart:
“Using visual and statistical techniques, ODG found evidence of disease outbreaks in the data” (Siegrist & Pavlin, 2004)

The participating groups were then asked to detect those outbreaks. Clearly, those who used a Cusum chart, or algorithms that mimic human visual inspection, were most likely to do “best”.

V. Injecting simulated outbreaks
Another issue with outbreak simulation is how to inject the simulated outbreak into the no-outbreak data. Clearly there are some periods when it is more likely to be detected than others.

In Goldenberg et al. (2002) we injected the simulated outbreak at every point in the series, and then evaluated the overall rate at which it was detected (along with the false-alarm rate). A similar approach was taken in Stoto et al. (2004).

In contrast, Burkom et al. (2005) injected the simulated outbreak at a randomly chosen start day (recall that their background data are themselves Poisson-simulated).
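
A sketch of the two evaluation designs, injection at every feasible start day versus a single random start day; the background series, signature, and naive threshold detector are all hypothetical:

```python
import numpy as np

def inject(baseline, signature, start):
    """Add the simulated signature to the attack-free series at `start`."""
    series = np.asarray(baseline, dtype=float).copy()
    series[start:start + len(signature)] += signature
    return series

def detection_rate(baseline, signature, detector):
    """Goldenberg et al. (2002)-style evaluation: inject the signature at every
    feasible start day; an injection counts as detected if any alarm fires
    during the outbreak window."""
    starts = range(len(baseline) - len(signature))
    hits = 0
    for start in starts:
        alarms = detector(inject(baseline, signature, start))
        hits += alarms[start:start + len(signature)].any()
    return hits / len(starts)

# Hypothetical example: Poisson background, a 3-day linear signature,
# and a naive 3-sigma threshold detector
rng = np.random.default_rng(2)
baseline = rng.poisson(lam=30, size=200)
signature = np.array([3, 6, 9])
mu, sd = baseline.mean(), baseline.std()
rate = detection_rate(baseline, signature, lambda s: s > mu + 3 * sd)

# Burkom et al. (2005)-style alternative: inject at a single random start day
random_start = rng.integers(0, len(baseline) - len(signature))
```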

VI. Some solutions
Since we really do not know the shape or magnitude of an outbreak, one approach is to simulate a range of different outbreaks and then evaluate algorithms over all the different types. This will most likely give preference to algorithms that are not very tightly coupled with a certain outbreak type (e.g., wavelets or other multi-resolution methods).

A practical consideration is to choose an outbreak duration no longer than the period within which we would act. For instance, if an anthrax attack is only detected three days after it begins, it is too late; in that sense we need only consider the outbreak signature over its first three days.

VII. References
Box, G. and Luceño, A. (1997). Statistical Control: By Monitoring and Feedback Adjustment. Wiley-Interscience, 1st edition.

Burkom, H., Murphy, S., Coberly, J., and Hurt-Mullen, K. (2005). Public Health Monitoring Tools for Multiple Data Streams. MMWR, 54 (Suppl), 55-62.

Goldenberg, A., Shmueli, G., Caruana, R. A., and Fienberg, S. E. (2002). Early Statistical Detection of Anthrax Outbreaks by Tracking Over-the-Counter Medication Sales. PNAS, 99 (8), 5237-5240.

Siegrist, D. and Pavlin, J. (2004). Bio-ALIRT Biosurveillance Detection Algorithm Evaluation. MMWR, 53 (Suppl), 152-158.

Stoto, M. A., Schonlau, M., and Mariano, L. T. (2004). Syndromic Surveillance: Is It Worth the Effort? Chance, 17 (1), 19-24.

Wong, W.-K., Cooper, G., Dash, D., Levander, J., Dowling, J., Hogan, W., and Wagner, M. (2005). Use of Multiple Data Streams to Conduct Bayesian Biologic Surveillance. MMWR, 54 (Suppl), 63-69.

Thursday, November 10, 2005

Multiple Testing

The goal of this post is to generate reactions on how we should tackle the multiple testing that takes place in biosurveillance systems. The notes are based on personal opinions and questions, and on conversations with Dr. Howard Burkom from the Johns Hopkins Applied Physics Lab.

I. RELEVANCE
Multiplicity occurs at multiple levels within a biosurveillance system:

  1. Regional level -- when monitoring multiple regions. The same holds within a region, where we monitor multiple locations (e.g., hospitals, offices, stores).
  2. Source level -- within a region we monitor multiple data sources (OTC, ER…).
  3. Series level -- within each data source we monitor multiple series. Sometimes multiple series are created from a single series by stratifying the data by age group or gender.
  4. Algorithm level -- within a single series, we may use multiple algorithms (e.g., for detecting changes in different parameters) or a method such as wavelets that breaks a single series down into multiple series.

The multiplicity actually plays a slightly different role in each case, because we have different numbers and sets of hypotheses.

Burkom et al. (2005) coin the terms “parallel monitoring” and “consensus monitoring” to distinguish between the case of multiple hypotheses being tested simultaneously using multiple independent data streams (“parallel”) and the case of monitoring multiple data sources to test a single hypothesis (“consensus”). According to this distinction we have parallel monitoring at the regional level, but consensus monitoring at the source, series, and algorithm levels.

Are the two types of multiplicity conceptually different? Should the multiple results (e.g., p-values) be combined in the same way?


II. HYPOTHESES
Regional level
-- Each region has a separate null hypothesis. For region i we have
H0: no outbreak in region i
H1: outbreak in region i
Therefore we have multiple sets of hypotheses.
If we consider single, isolated bioterrorist attacks, then these sets (and the tests) are independent.
If we expect a coordinated attack at multiple locations simultaneously, then there is positive dependence; the same holds for an epidemic.
Source level -- even if we limit ourselves to a certain geographic location, say a single zipcode, we still have multiple data sources. In this case there is a single conceptual null hypothesis for all sources:
H0: no outbreak in this region
H1: outbreak in this region

However, we should really treat the outbreak occurrence as a hidden event. We are using the syndromic data as a proxy for measuring the hidden Bernoulli variable, and in fact testing source-specific hypotheses:
H0: no (outbreak-related) increase in OTC sales
H1: (outbreak-related) increase in OTC sales
When we test this, we ignore the (outbreak-related) part: we simply search for increases in OTC sales, ER admissions, etc., and try to eliminate as many external factors as possible (promotions, day of week, and so on). To see the added level of uncertainty introduced by using proxy information, consider the following diagram:

When there is no outbreak we might still be getting false alarms that we would not have received had we been measuring a direct manifestation of the outbreak (such as laboratory results). For example, a new pharmacy opening up in a monitored chain would show an increase in medication sales, which might cause an alert. So we should expect a much higher false alarm rate.

On the other hand, in the presence of an outbreak we are likely to miss it if it does not get manifested in the data.

So the underlying assumptions when monitoring syndromic data are:
(1) The probability of outbreak-related anomalies manifesting themselves in the data is high (removing red nodes from tree)
(2) The probability of an alarm due to non-outbreak reasons is minimal (removing blue nodes from tree)

Based on these two assumptions, most algorithms are designed to test:
H0: No change in parameter of syndromic data
H1: change in parameter of syndromic data
With respect to assumption (2), it has been noted that there are many cases where non-outbreak reasons lead to alarms. Controlling tightly for those factors is therefore important. Alternatively, the false-alarm rate (and the correct-detection rate) should be adjusted to account for these additional factors. The same issues apply to series- and algorithm-level monitoring.

Series level -- multiple series within a data source are usually collected for monitoring different syndromes; for instance, cough medication/cc, fever medication/cc, etc. This is also how the CDC thinks about the multiple series, grouping ICD-9 codes into 11 symptom categories. If we treat each symptom group separately, then we have 11 tests going on:
H0: no increase in syndrome j
H1: increase in syndrome j

Ideally, we would look at the specific combination of symptoms (i.e., a syndrome) that increases, to better understand which disease is spreading. Also, we believe that an outbreak will lead to an increase in multiple symptoms, so these hypotheses are really related. Again, the conditional part comes in: whether there is an outbreak, plus the additional level of uncertainty about if and how the syndromic data will show it.

Algorithm level -- multiple algorithms running on the same series might be used because each algorithm looks for a different type of signal. This gives a single H0 but multiple H1:
H0: Series mean is same
H1: change in series mean of type k
Alternatively, if we have algorithms that monitor different parameters (e.g., a Cusum for the mean and an F-statistic for the variance), then we also have a multiplicity of null hypotheses. Finally, algorithms such as wavelet-based MS-SPC break the series down into multiple resolutions; if we test at each resolution we again have a multiplicity. Whether the resolutions are correlated depends on the algorithm (e.g., whether or not it uses downsampling).
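
To make the wavelet case concrete, here is a rough sketch of “one test per resolution”, using a hand-rolled Haar decomposition with downsampling (no wavelet library assumed; the per-level 3-sigma rule is only a stand-in for a proper control chart):

```python
import numpy as np

def haar_levels(x, n_levels=3):
    """Haar multi-resolution decomposition with downsampling. Returns the
    detail coefficients at each level; monitoring each level separately
    gives one test per resolution."""
    approx = np.asarray(x, dtype=float)
    details = []
    for _ in range(n_levels):
        approx = approx[: len(approx) // 2 * 2]        # truncate to even length
        pairs = approx.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return details, approx

# Naive example: one 3-sigma test per resolution level => multiple tests
rng = np.random.default_rng(3)
series = rng.poisson(lam=30, size=128)
details, smooth = haar_levels(series)
alarms_per_level = [np.abs(d - d.mean()) > 3 * d.std() for d in details]
```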

III. HANDLING MULTIPLE TESTING STATISTICALLY
There are several methods for handling multiple testing, ranging from Bonferroni-type corrections to Benjamini & Hochberg’s False Discovery Rate (and its variants). There are also Bayesian methods aimed at tackling this problem. Each of these methods has its limitations: Bonferroni is considered over-conservative; FDR corrections depend on the number of hypotheses and are problematic with too few hypotheses; and Bayesian methods are sensitive to the choice of prior, and it is unclear how that prior should be chosen. Burkom et al. (2005) consider these methods for correcting for "parallel monitoring" (multiple hypotheses with independent data streams). For "consensus monitoring" they consider a different set of methods, taken from the world of clinical trials, for combining the multiple p-values. These include Fisher’s statistic and Edgington’s method, whose sum-of-p-values statistic can be approximated, for a large number of tests n, by a normal distribution with mean n/2 and variance n/12. Burkom et al. (2005) discuss the advantages and disadvantages of these two methods in the context of the ESSENCE surveillance system.
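
For concreteness, a minimal sketch of the two consensus combination rules; the p-values are placeholders rather than outputs of any actual detector:

```python
import numpy as np
from scipy import stats

def fisher_combined(pvalues):
    """Fisher's statistic: -2 * sum(log p_i) ~ chi-square with 2n df under H0."""
    p = np.asarray(pvalues, dtype=float)
    return stats.chi2.sf(-2.0 * np.log(p).sum(), df=2 * len(p))

def edgington_combined(pvalues):
    """Edgington's method: sum of the p-values, using the large-n normal
    approximation (a sum of n uniforms has mean n/2 and variance n/12)."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    z = (p.sum() - n / 2.0) / np.sqrt(n / 12.0)
    return stats.norm.cdf(z)   # a small sum of p-values gives a small combined p

# Hypothetical p-values from monitoring several sources for a single region
pvals = [0.04, 0.20, 0.12, 0.51]
print(fisher_combined(pvals), edgington_combined(pvals))
```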

But should we really use different methods for accounting for the multiplicity? What is the link between the actual corrections and the conceptual differences?

IV. WHAT CAN BE DONE?
  1. Rate the quality of data sources: signals from more reliable sources should be weighted more heavily.
  2. Evaluate the risk level of the different regions: alarms in higher-risk regions should be taken more seriously (along the lines of Vicky Bier’s arguments about investing in higher-risk cases).
  3. “The more the merrier” is probably not a good strategy when it comes to the number of data sources. It is better to invest in a few reliable data sources than in many less reliable ones. Along the same lines, the series chosen for monitoring should be carefully screened according to their real contribution and their reliability. With respect to regions, it is better to monitor higher-risk regions (in the context of bioterrorist attacks or epidemics).
  4. Solutions should depend on who the monitoring body is: national surveillance systems (e.g., the CDC’s BioSense) face the regional-level issue more than local systems do.
  5. The choice of symptom grouping and syndrome definitions, which is currently based on medical considerations (http://www.bt.cdc.gov/surveillance/syndromedef/index.asp), would benefit from incorporating statistical considerations.

V. REFERENCES

Burkom, H. S., Murphy, S., Coberly, J., and Hurt-Mullen, K. (2005). Public Health Monitoring Tools for Multiple Data Streams. MMWR, 54 (Suppl), 55-62.


Marshall, C., Best, N., Bottle, A., and Aylin, P. (2004). Statistical Issues in the Prospective Monitoring of Health Outcomes Across Multiple Sources. JRSS A, 167 (3), 541-559.