Appendix D: E. Coli Assessment: Some Comments by Edmund Crouch, PhD, Cambridge Environmental Inc.
Pages 147-162



From page 147...
... They were specifically written for an audience who had ready access to the draft USDA risk assessment and spreadsheet, and it will be difficult for anyone who does not have these documents to understand them.

GENERAL

I was requested to examine in particular the implementation of the model described in this Risk Assessment, so the following concentrates on the spreadsheet, although unavoidably I have to comment on other matters as well.
From page 148...
... SOME IMPLEMENTATION ISSUES

Notation

In what follows, an unqualified page number refers to the E. coli risk assessment, PDF version.
From page 149...
... Without some translation, I find this list uninformative. Trying to compare with the VBA code SimUncertainty (where the cell references are given in the cell(row,col)
From page 150...
... Module RunSegments; Subroutine GrowthOnly:
    sGrinder not defined
    sCoreModelBook not defined
    sResultsBook not defined
Module Functions; Function SortString:
    When Risk is not loaded, the built-in VBA function Mid is not recognized unless all references to Risk are removed (for example, on the Tools:References list in the VBA editor)
From page 151...
... was estimated by combining the results from Equation 3.2 across all seven studies using Equation 3.4." The seven studies referred to are in Table 3-1 on page 36. The spreadsheet model lists 10 studies (columns C:L)
From page 152...
... Thus the spreadsheet headings do not match internally, nor with Table 3-1 (page 36). This confusion presumably originally arose because of the additional information included in the spreadsheet at Is, that some studies did not include the sampling of adult cattle.
From page 153...
... The final normalization is not necessary, because the RiskDiscrete function subsequently used (at F33) performs such a normalization internally (this feature is apparently undocumented, but is essential for such a function)
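Since the point turns on that undocumented normalization, a minimal Python sketch may make it concrete; it assumes a discrete sampler that, like RiskDiscrete apparently does, normalizes its weights internally (the function name and the example weights are illustrative, not taken from the spreadsheet):

    import numpy as np

    def risk_discrete(values, weights, size, rng):
        # Draw from a discrete distribution, normalizing the weights
        # internally so callers need not pre-normalize them (the
        # behavior RiskDiscrete appears to exhibit).
        p = np.asarray(weights, dtype=float)
        p = p / p.sum()  # internal normalization
        return rng.choice(values, size=size, p=p)

    rng = np.random.default_rng(0)
    # These weights sum to 10, not 1; the draw still works.
    draws = risk_discrete([1, 2, 3], [2.0, 5.0, 3.0], size=5, rng=rng)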
From page 154...
... , it must necessarily be a truncated exponential (so that its mean and standard deviation are not quite 0, contrary to page 39). Examination of the spreadsheet entries at C12:L12 (in the row labeled "Herd sensitivity")
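The effect of truncation on the moments is easy to check with a short Python sketch (the scale and truncation point below are placeholders, not values from the assessment); for an untruncated exponential the mean and standard deviation are equal, and truncation breaks that equality:

    import numpy as np

    rng = np.random.default_rng(0)
    scale, upper = 1.0, 2.0  # hypothetical scale and truncation point

    # Inverse-CDF sampling of an exponential truncated to [0, upper]
    u = rng.uniform(size=100_000)
    x = -scale * np.log(1.0 - u * (1.0 - np.exp(-upper / scale)))

    print(x.mean(), x.std())  # no longer equal after truncation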
From page 155...
... The RiskBeta function samples from a beta distribution corresponding to a binomial observation, and the result is divided by a test sensitivity that is itself a sample drawn from a beta distribution, again based on a binomial distribution. The impossible situation of the "apparent prevalence" higher than the "test sensitivity" is handled by substituting unity for the ratio, but the correct approach to obtain a sample distribution proportional to the conditional likelihood is to censor this sample combination, not to arbitrarily replace it.
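The difference between censoring and substitution can be seen in a Python sketch (the beta parameters below are illustrative placeholders, not the assessment's values):

    import numpy as np

    def prevalence_censored(a1, b1, a2, b2, rng, size=10_000):
        # Sample apparent prevalence / test sensitivity, discarding
        # (censoring) draws where the ratio exceeds 1; the retained
        # sample stays proportional to the conditional likelihood.
        out = []
        while len(out) < size:
            ratio = rng.beta(a1, b1) / rng.beta(a2, b2)
            if ratio <= 1.0:
                out.append(ratio)
        return np.array(out)

    def prevalence_clamped(a1, b1, a2, b2, rng, size=10_000):
        # The spreadsheet's approach: substitute unity for impossible
        # ratios, which piles spurious probability mass at 1.
        ratio = rng.beta(a1, b1, size) / rng.beta(a2, b2, size)
        return np.minimum(ratio, 1.0)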
From page 156...
... The method adopted for obtaining seasonal averages of within-herd prevalence appears to be ad hoc. Were complete-year data available for
From page 157...
... If there is any contrast between seasons, the second imputation forces the analysis to underestimate it. On the other hand, since some studies provide information only within particular months, and there are possibly substantial differences between studies in the same months, the method adopted necessarily confounds seasonal differences with study differences.
From page 158...
... At pages 65-66, we have "the reduction from decontamination (D1) was modeled using a triangular distribution with a minimum value of 0 logs, an uncertain most likely value ranging from 0.3 to 0.7 logs, and an uncertain maximum value ranging from 0.8 logs to 1.2 logs." In fact, although not specified here, the two uncertainty ranges given were used to define uncorrelated uniform distributions.
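Read that way, one uncertainty iteration amounts to the following Python sketch (a reconstruction from the quoted description, not the spreadsheet's own code):

    import numpy as np

    rng = np.random.default_rng(0)

    # Uncertainty iteration: the "most likely" and "maximum" log
    # reductions are drawn from independent (uncorrelated) uniforms.
    mode = rng.uniform(0.3, 0.7)
    high = rng.uniform(0.8, 1.2)

    # Variability iterations: with those parameters held fixed, each
    # reduction D1 is drawn from Triangular(0, mode, high).
    d1 = rng.triangular(0.0, mode, high, size=10_000)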
From page 159...
... The spreadsheet takes the same values for the "most likely" and "maximum" log reductions from their uncertainty distributions, but implements the variability distributions through separate instances of triangular distributions.

A FEW OTHER SPECIFIC COMMENTS

Page 18, para. -2: In Washington state...
From page 160...
... What the writers intended to say was that w is selected from an exponential variability distribution on each variability iteration, or something of that nature. The spreadsheet removes the unnecessary summation.
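In code terms, the intended behavior would be something like this Python sketch (the scale is a placeholder):

    import numpy as np

    rng = np.random.default_rng(0)
    scale = 1.0  # placeholder scale for w's exponential distribution

    for _ in range(1000):  # variability iterations
        w = rng.exponential(scale)  # one fresh draw per iteration
        # ... use w in this iteration's calculation; no summation
        # across iterations is involved ...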
From page 161...
... (2000) data and a uniform distribution with a minimum approaching 0 and a maximum of the summer TR." What is done in the spreadsheet is to use a 50% probability mixture between the ratio of betas and a uniform ranging from zero to that ratio.
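A Python sketch of that mixture (the beta parameters are illustrative placeholders, not the assessment's values):

    import numpy as np

    def mixture_draw(a1, b1, a2, b2, rng):
        # With probability 0.5 return the ratio of two beta draws;
        # otherwise return a uniform draw between 0 and that ratio.
        ratio = rng.beta(a1, b1) / rng.beta(a2, b2)
        if rng.uniform() < 0.5:
            return ratio
        return rng.uniform(0.0, ratio)

    rng = np.random.default_rng(0)
    samples = [mixture_draw(2.0, 8.0, 5.0, 5.0, rng) for _ in range(10_000)]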

