Special Report 14: Insufficient Data Cases
by Brad Sparks
Very important point Jan makes:
What Jan is saying is that Battelle studied a bogus dataset of "Knowns" that was in fact saturated with and contaminated by junk Insufficient Data cases (besides those already labeled Insufficient Information), cases that no one can tell are IFO's or UFO's. Battelle then compared this bogus hodgepodge of "Knowns" with the equally contaminated dataset of "Unknowns" (diluted with Poor and Doubtful cases, etc.) and purported to find differences between them. In fact they were comparing contaminated junk cases to (largely) contaminated junk cases. The proper procedure would have been to select a high-quality set of Knowns that were 100% certain IFO's (with NO data "not stated"!) and compare them against the Excellent Unknowns. (Oddly enough, the Condon Report presents a better comparison of UFO's with IFO's in Table 6, pp. 796-797, Bantam ed.)
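The filtering step argued for above can be sketched in code. This is only an illustration of the selection logic, not an analysis of actual Project Blue Book data: every field name, quality label, and sample record below is a hypothetical stand-in.

```python
# Sketch of the pre-comparison filtering the text argues Battelle should
# have done. All field names and records are hypothetical illustrations.

def is_clean_known(case):
    """A 'Known' usable as a baseline: positively identified (a certain
    IFO) and with no data element recorded as 'not stated'."""
    return (case["identification"] == "certain"
            and "not stated" not in case.values())

def is_excellent_unknown(case):
    """An 'Unknown' of Excellent quality only, excluding the Poor and
    Doubtful cases that dilute the category."""
    return case["category"] == "unknown" and case["quality"] == "excellent"

cases = [
    {"category": "known", "identification": "certain",
     "quality": "good", "speed": "radar-measured"},
    {"category": "known", "identification": "possible",
     "quality": "poor", "speed": "not stated"},        # junk: rejected
    {"category": "unknown", "identification": "none",
     "quality": "excellent", "speed": "radar-measured"},
    {"category": "unknown", "identification": "none",
     "quality": "doubtful", "speed": "not stated"},    # diluting: rejected
]

clean_knowns = [c for c in cases
                if c["category"] == "known" and is_clean_known(c)]
excellent_unknowns = [c for c in cases if is_excellent_unknown(c)]

# Only the filtered sets would then be compared against each other.
print(len(clean_knowns), len(excellent_unknowns))
```

With these toy records, each filter rejects one of its two candidate cases, leaving one clean Known and one Excellent Unknown for the comparison.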
Even the data elements chosen by Battelle make little or no sense. How does Number of Objects distinguish between UFO's and IFO's? Are UFO's always, say, single objects and IFO's always multiple objects? No, of course not: most UFO's are single objects and most IFO's are single objects. "Comparing" the number of objects seen is statistically worthless. In the Speed of Object category, the largest subset was "NOT STATED" data! How can that be anything other than a disguised Insufficient Data category of cases that should have been removed before statistical analysis? Most of the remaining Speed cases were subjective guesses by human observers that should have been rejected as well; only verified radar-visual speeds and a few cases of reasonably accurate visual estimates should have been retained. Etc. etc.
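A toy calculation makes the "NOT STATED" objection concrete. The counts below are invented for illustration (they are not Battelle's actual figures): when missing-data entries form the largest subset of a category, the raw distribution is dominated by the pattern of missingness, and the apparent shares shift completely once those cases are removed.

```python
# Toy illustration (invented counts, NOT Battelle's data) of why a
# category whose largest subset is "NOT STATED" is statistically
# worthless: the raw distribution mostly reflects missing data, so a
# Knowns-vs-Unknowns comparison on raw counts compares missingness,
# not object behavior.

from collections import Counter

speed_reports = (["not stated"] * 50        # largest subset: no usable data
                 + ["under 400 mph"] * 20
                 + ["over 400 mph"] * 30)

raw = Counter(speed_reports)
usable = Counter(s for s in speed_reports if s != "not stated")

def shares(counter):
    """Fraction of cases in each speed bin."""
    total = sum(counter.values())
    return {k: round(v / total, 2) for k, v in counter.items()}

print(shares(raw))     # "not stated" alone is half the raw distribution
print(shares(usable))  # the picture among cases with actual speed data
```

Here half the raw "distribution" is just missing data; among cases that actually report a speed, the split is 40/60, a very different picture from the raw 20/30 against a 50-case "not stated" bin.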