This position statement is based on what I knew about the Big Sur incident in 2006, but has been updated. My portion of this analysis deals primarily with the reported capabilities of the system then used at Big Sur. The paper is divided into three parts: 1) How the Lunascan Project work compares with the Big Sur/Boston University scope system; 2) What I contend the Big Sur system could actually see and record; and, 3) My comments about statements made by Kingston George, the Project Engineer, Operations Analyst for Headquarters, 1st Strategic Aerospace Division. At any point the reader can skip to the area they want to read.
April 7, 2006; updated September 16, 2012
One of the most controversial incidents in UFOlogy is the Big Sur incident of September 1964. An Air Force telescope tracking system allegedly filmed a missile warhead being taken down by a UFO off the California coast. A modified 90 mm military gun mount served as the cradle for a 24" primary telescope mirror with a 240-inch basic focal length. An image orthicon television camera was the sensor, and its thirty-frame-per-second, 875-line interlaced output was displayed on a monitor in an accompanying van, where a kinescope recorded it on 35 mm motion-picture film. There were 11 launches, 9 of which were successful. Just how good would this circa-1964 system be?
Part 1 - A Tour of The Lunascan Project
I was one of the first to use security surveillance cameras to observe and record live images of the Moon. I began doing this in the 1970s but didn't get serious about it until 1995, mostly for financial reasons. I was tiring of the stories of lunar anomalies and was hearing reports of "fast walkers" being seen and recorded. Looking back, it was the Big Sur incident that pushed me over the line, and by the end of the summer I had a team together, had raised some money, and TLP (The Lunascan Project) was born. I thought I had retired from UFO work and was finally doing something I had dreamed of doing for over 20 years. Something more fun than just more hard work.
The very first camera that I used was, like the Image Orthicon at Big Sur, a vidicon camera. It worked. The problems I had were: 1) it was too light-sensitive, with not enough contrast on the day side of the Moon and too much light on the dark side; 2) "terminator flash" when crossing the terminator line (day/night line) during scans; and 3) after-image trails from bright objects during the scans (slices of the lunar surface obtained by the Earth's rotation).
The project used a 16" f/4.5, 1830 mm Newtonian reflector at first, then we built a smaller, but better, 10" f/6 with a 57.8" f/l. It was a modest effort with about a dozen team members and about $3,000 worth of optical equipment. The Big Sur system was at least as well thought-out as ours, long before it was put into service. They knew what it was capable of even before they built it. Of that I'm sure. And they used it many times!
But first, a little about what I DO know about imaging, based on my work on The Lunascan Project. Our modest project used off-the-shelf equipment, beginning with the 1/3" analog b&w Panasonic WV-450 vidicon tube surveillance camera. Soon after the project was started in the fall of 1995 we advanced to a 1/3" CCD camera with over 400 lines of resolution and a 525-line/60-fields-per-second scanning system. Before long we had three such cameras, one for each scanning system: LPS (Low Power Scanning), a Canon 450 mm; MPS (Medium Power Scanning), a 4.25" f/8, 900 mm; and HPS (High Power Scanning). Briefly, the LPS was used to view the Moon and the area around it at low power, and the outdated vidicon was used as the Findercam. The Big Sur system had something like this, called the Image Orthicon. But it also had the ability to zoom in and get good images as well as fully stretched high-powered images, depending on seeing conditions and ranges. Our MPS ran at about 220 power and its CRT monitor showed only a portion of the lunar globe. The HPS, however, ran at maximum effective power and showed an area (FOV) of about 400 miles, simulating a 600-mile orbiting camera on the composite monitor. With the DOB Driver II computer we could hover over the 52-mile crater Copernicus for hours and watch all three monitors at the same time. With the adjustable T-C adaptor, or even a Barlow, we could pump the system's HPS power way over this, but the FOV was much smaller and we could miss something; that is, if we didn't also have to contend with polluted midwestern skies. We had audio and video dubs of the WWV time signal from Fort Collins, Colorado as we recorded every frame, 108,000 images per hour. Each camera was just a little larger than a stick of butter, weighed about nine ounces, and had 512 by 412 pixels on an interline chip.
Using NTSC as an example, there are 525 scan lines (vertical resolution) total, but only 485 scan lines are used to comprise the basic detail in the image (the remaining lines are encoded with other information, such as closed captioning and other technical information).
Most analog TVs with at least composite AV inputs can display up to 450 lines of horizontal resolution, with higher-end monitors capable of much more. We used high-quality computer-game composite monitors. But most analog TV broadcast cameras of the period in question were about 330 lines of resolution, about what we were able to achieve (400) with the newer CCD chips. If we could have afforded it, we would have purchased at least one 600-line camera, but you had to have a 600-line monitor or you couldn't see any difference, let alone record it.

A video camera chip has picture elements, or pixels, that now measure about 5.6 microns square. Normal high-resolution photographs of the Moon require exposures of up to three seconds. This is long enough for atmospheric turbulence to blur the fine details that the tiny film grains are otherwise capable of capturing. In those three seconds we get 90 video images. Only about 2% of the photons striking a photographic emulsion form the image. With CCD imaging almost half of the photons generate a detectable signal. The result is a 25-fold gain in detector efficiency.
Our older cameras were the GBC-400, which have 1/3" CCD (charge-coupled device) chips. One camera was connected to the massive 16" f/4.5 reflector using a T-C adaptor. The image of the Moon was projected through a 25 mm Plossl eyepiece right onto the 1/3" chip. The Moon's 2,160-mile-wide image, projected about the size of a grapefruit (see chart below), was OVER-projected, and the chip itself, about the size of one of the tiny rectangles below, concentrated on an FOV of 400 miles of the lunar surface. That's where image projection produces astonishing power. At 400 power it is like orbiting at 600 miles above the Moon. But our newer Celestron NexImage CCD camera has a 1/4" chip and uses no eyepiece at all.
Rukl Lunar Sections
Above you can see how a large, bright lunar image can be focused and yet have the area under the chip be dedicated to a small area on the Moon. This is what makes lunar imaging with CCD and CMOS cameras so powerful. Celestron has a new 5-megapixel CCD camera (USB type) for only $200.
I later downsized the main scope and built, almost from scratch, the 10" f/6. I was told it would have greater contrast and was more suited for our work. We used the same camera. Both scopes had been operated from a wheeled mount I designed and built called the STU (Scope Transport Unit).
It was too much work to move the big scopes out to the viewing sites and hook up all the wires. So I got out of the "business" by the end of the decade, after many good viewing sessions. A few years later I purchased the Celestron 8" Schmidt Cassegrain. It has a 2032 mm focal length and a resolution of 0.68 arcsec, shy of Big Sur's 0.231. Shown below is the scope on its original clock-drive mount. I had to rebuild the mount so I could take full control of the scope and lock on targets with ease. (See photo).
8" f/10 and original mount

In 2011 I purchased the Ford van I named the "Moon Buggy" and loaded it up with a system that is more like Big Sur's. I purchased two new cameras which use NO eyepieces but have the power of 5 and 6 mm eyepieces. One is the LPI, a CMOS (complementary metal-oxide semiconductor) chip camera; the other is the SSI (Solar System Imaging) camera, a superior CCD (charge-coupled device). Both use the scope at prime focus, no eyepiece, precisely what was used at Big Sur. The image passes through the telescope and is focused directly onto (and OVER) the chip. The imaging is superb and the contrast is fantastic. With the new cameras there is no terminator flash and the dark side is VERY black, perfect for the anomalistic flash of a meteorite impact. Both cameras have adjustable frame rates but are kept at 30 frames per second, the images being processed by two separate computers, then through two digital-to-analog converters (to 640x480) so they can be observed on separate composite monitors and recorded (with audio and video time dubbing from Fort Collins, Colorado) on VHS AND DVRs. If we see it, we get it. We can play the recordings back and/or analyze them, frame-grab selected images, and present them to the interested world. And we know when any image (or anomaly) occurred.
The "Moon Buggy" van rear view
Left rear side of the "Moon Buggy"
Part 2 - The Relevance to Big Sur
What is relevant about all this is that we have been using equipment of various types for over 15 years. And what we were capable of doing on a shoestring budget in the early years with a 16" or an 8" is nothing compared to the government-funded, state-of-the-art system built around the Boston University 24" scope at Big Sur. But for the sake of the scoffers, we'll use our smaller 8" f/10 for comparisons with the Big Sur B.U. equipment.
Videotape was not practical or available for Big Sur in 1964. The method used to document the warhead test was a kinescope. Typically, the term can refer to the process itself, to the equipment used (a 16 mm or 35 mm movie film camera mounted in front of a video monitor and synchronized to the monitor's scanning rate), or to a film made using the process. Kinescopes were the only practical way to preserve live television broadcasts prior to the introduction of videotape in 1956. The term originally referred to the cathode ray tube used in television receivers, as named by inventor Vladimir K. Zworykin in 1929. Hence, the recordings were known in full as kinescope films. RCA was granted a trademark for the term (for its cathode ray tube) in 1932; it voluntarily released the term to the public domain in 1950. Kingston George said 35 mm was used at Big Sur.
So, what could they see and what could they record?
At this point we are not sure if it was possible for the Big Sur image-orthicon camera to have had 875 lines of resolution. But Kingston George even mentioned a thousand-line camera! The distance to the horizon at 4,000 feet elevation (the height of the Big Sur equipment site) was calculated at 77 miles. They were looking south/southwest, with the launch moving from east to west (L to R). They should have seen the missile at about 1,300 feet as it rose above the horizon, and must have had a good view over the ocean looking southwest.
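As a sanity check, the 77-mile figure can be reproduced with the standard distance-to-horizon formula. The sketch below is mine, not anything from the Big Sur documentation, and assumes a mean Earth radius of 3,959 statute miles:

```python
import math

R_EARTH_MI = 3959.0  # assumed mean Earth radius, statute miles

def horizon_distance_mi(height_ft: float) -> float:
    """Geometric distance to the horizon for an observer at height_ft."""
    h_mi = height_ft / 5280.0
    return math.sqrt(2.0 * R_EARTH_MI * h_mi + h_mi ** 2)

print(round(horizon_distance_mi(4000), 1))  # ~77.5 miles
```

That agrees with the calculated 77 miles to within refraction effects, which in practice extend the visible horizon slightly farther.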
Dr. Jacobs had earlier estimated that the UFO made its appearance more than a minute later, after the warhead itself separated from the nosecone as "we neared the end of the camera run." Published Atlas launch data indicate that the nosecone-separation event occurs at 5.3 minutes (T+320 seconds), at which point the nosecone package is 475 nautical miles downrange and 200 nautical miles in altitude. Using the above info of 200 miles altitude and 500 miles range, the distance to the missile would be about 538 nm.
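The 538 nm figure is simply the Pythagorean slant range from the rounded downrange and altitude numbers above. A minimal check, assuming 500 nm downrange and 200 nm altitude:

```python
import math

downrange_nm = 500.0  # approximate downrange distance at separation
altitude_nm = 200.0   # approximate altitude at separation

# Slant range from the observer's neighborhood to the vehicle
slant_range_nm = math.hypot(downrange_nm, altitude_nm)
print(round(slant_range_nm, 1))  # ~538.5 nm
```

This ignores the observer's own offset from the launch site and Earth curvature, which is fine for a rough range estimate at these distances.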
Now, let's put our 8" folded Schmidt Cassegrain up on Big Sur in 1964. What would we be able to see? Remember, they used a 24" scope. I'm assuming their resolution was the normal 575 lines interlaced, rather than the unconfirmed 875. Our cameras have less resolution but they give us a good 400 lines for all practical purposes. The better cameras were color cams and had larger chips (1/2") so the power produced was actually less and we stayed with 1/3 inch chips until the Celestron NexImage Solar System Imager was purchased. It has a 1/4" chip. Let's also get out of the murky midwest where we still operate very well looking at targets 240,000 miles out (Lunar features), and move to clearer skies, up around 4,000'.
First of all, this wasn't a test to see if they could track a missile. The space program had been doing that for years. They were watching and filming a warhead separation at 500 miles. And the Air Force claims it worked, many times. They just don't admit anything about an anomalous event. They flatly deny it and report a successful warhead splashdown.
The Atlas missile is about 10' thick, not including the side boosters, and its length was 75' 10", or 85' 6" in the ICBM mode. Waskiewicz illustrates in his report just what the missile would have looked like on the monitor. We're told that all they saw were points of light. But calculations show that our 8" f/10 would show a well-lit target 30' in diameter (the UFO) at 500 miles, and its apparent distance (at 400 power) would be a simulated 1.25 miles!!!
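The 1.25-mile figure is just the actual range divided by the magnification. My own back-of-the-envelope sketch below (not anything from the Big Sur reports) also gives the angular size a 30' object would subtend at 500 miles:

```python
FT_PER_MI = 5280.0
ARCSEC_PER_RAD = 206265.0

def angular_size_arcsec(size_ft: float, range_mi: float) -> float:
    """Small-angle approximation of the angle an object subtends."""
    return size_ft / (range_mi * FT_PER_MI) * ARCSEC_PER_RAD

# A 30 ft object at 500 miles:
print(round(angular_size_arcsec(30, 500), 2))  # ~2.34 arcsec

# Apparent (simulated) distance at 400 power:
print(500 / 400)  # 1.25 miles
```

Note that 2.34 arcsec is roughly ten times the 24" scope's 0.231 arcsec diffraction limit, so under good seeing the object would span several resolution elements rather than being a bare point.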
Let's try two experiments.
The 30' diameter of many UFOs of a certain type has been encountered by a number of researchers, including myself, during 50 years of investigation and research. There's a place I visit in S. Illinois that always caught my attention, given my interest in UFOs. My son and his family live at Norris City. As one approaches this little country town, just before you cross Hwy 45 from Rt. 1, you can see very plainly a water tower in the distance. That 30' water tower is 1-2 miles from the driver. There is no mistake about it. Without any optical aids you can see the doughnut shape, and in the evening the shadow on one side from the lowering sun. You can see detail. But in the image below it is harder to make out as it just clears the tree line and building (center). This image reduction factor has always impressed me, as in the case of the Utah and Montana films: what the witnesses reported versus what the camera recorded. Remember, even the Moon is only half a degree wide, yet one can see lots of detail with the unaided eye. If one were to enlarge the photo below on a standard home movie projection screen, it's almost as good as being there. But click on the enlargement link below and move closer to your monitor screen. This isn't like what the Big Sur crew saw during the event. This is what Jacobs and Mansmann saw, an enlarged projected image. And the simrange is the same: 1-2 miles!!!
Water Tower at 1-2 miles / What Big Sur could see at 500 miles
One can argue seeing conditions in most places on the Earth, but Big Sur's altitude was picked to observe and record an Atlas missile warhead test, and more than once. Just how good would the picture have been?
Jacobs mentioned that the large monitor was about 20" wide. CRTs in monitors that large have been about 17" measured diagonally, and were kinescoped with 16 or 35 mm film. The picture area of standard 16 mm is 7.49 mm by 10.26 mm, with an aspect ratio of close to 1.33. But this is considered home movie quality and I seriously doubt the Big Sur 1964 technology would have used it. It would have worked well enough for this argument, but I believe 35 mm was most probably used, which consists of strips about 1-3/8 inches in width. Somebody had to be looking through a viewfinder to center the camera on the target, and others watched on the tracking scopes, but at least the kinescope was filming the CRT (cathode ray tube). Everything on that screen was recorded and later viewed by analysts. Projected on the movie screen by Mansmann, the width of that frame would have been blown up 35 times on a 48" beaded screen!!! Viewing it with a microscope or jeweler's loupe would have produced fantastic results. No one denies that they were able to see the objects on the film. The only question was, how good? But the Air Force denies that you could see any more than just points of light.
The BU telescope at Big Sur had a 24" mirror with a resolution of 0.231 arcsec. Our current scanning unit is an 8" f/10 with a resolution of 0.68 arcsec. As I mentioned before, we had a 16" f/4.5 and later a 10" f/6, but our small Schmidt Cassegrain is a mighty mouse of a system when used with the new cameras of today. It is how we USE the system that produces the power we have, which puts us at a simrange of 600 miles from the lunar surface. The Celestron Solar System Imaging camera is equivalent to a 5 mm eyepiece.
2032 mm/5 mm = 406X
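Both the magnification and the quoted resolution figures follow from textbook formulas. The sketch below is mine; it assumes 550 nm (green) light for the Rayleigh diffraction limit, and the limits are theoretical values that seeing conditions always degrade in practice:

```python
# Magnification: focal length over eyepiece (or eyepiece-equivalent)
focal_length_mm = 2032.0   # Celestron 8" SCT
eyepiece_equiv_mm = 5.0    # SSI camera's stated eyepiece equivalence
magnification = focal_length_mm / eyepiece_equiv_mm
print(round(magnification))  # 406

# Simulated range ("simrange") to the Moon at that power
moon_distance_mi = 240000.0
print(round(moon_distance_mi / magnification))  # ~591 miles

# Rayleigh diffraction limit for the two apertures (assumed 550 nm light)
ARCSEC_PER_RAD = 206265.0
def rayleigh_limit_arcsec(aperture_in: float) -> float:
    aperture_m = aperture_in * 0.0254
    return 1.22 * 550e-9 / aperture_m * ARCSEC_PER_RAD

print(round(rayleigh_limit_arcsec(8), 2))   # ~0.68 arcsec (our 8")
print(round(rayleigh_limit_arcsec(24), 2))  # ~0.23 arcsec (the B.U. 24")
```

The computed limits match the 0.68 and 0.231 arcsec figures quoted above to within rounding, which is reassuring: the three-times-larger mirror buys a three-times-finer diffraction limit.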
There are four monitors in the van. Below you can see the Quad Monitor #1 which displays the finder (Camera 1), the crew compartment camera (Camera 2), the VMA graphics showing the actual moon phase (Camera 3), and finally, what Camera 4 is seeing. Here Camera 4 is seeing the crater Eratosthenes and the Apennine Mountain range.
Quad Monitor view (Monitor #1 in van)
Monitor #3 in the van displays the full Camera 4 view right from the computer. I chose the image below, however, because it is pertinent to this discussion. On this monitor and the DVR recording there is no video time dub, the time fix is established from the audio track of WWV at Fort Collins, Colorado.
Straddling Sections 23 & 24
If you look at the right side of the above image, right in the smooth asteroid-impact area, the Sea of Serenity, you'll see the 10-mile crater Bessel. Take a good look. This is a 400-power, 600-mile simrange image, and the Moon is 240,000 miles away. The domed saucer reported in the Big Sur film would have been (if 30' wide and at 500 miles) 14,400' wide extrapolated to lunar range, or a little more than 1/4th the size of Bessel shown above, no matter how projected. Not very impressive with our 8".
30' / 500 mi = x / 240,000 mi, so x = 14,400', or about 2.7 miles wide at the distance of the Moon
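That proportion, sketched in code with the same numbers used above:

```python
object_diameter_ft = 30.0    # reported saucer diameter
object_range_mi = 500.0      # reported range at Big Sur
moon_range_mi = 240000.0     # approximate Earth-Moon distance

# Scale the object to the equivalent size it would need at lunar distance
# to subtend the same angle
equivalent_ft = object_diameter_ft * moon_range_mi / object_range_mi
print(equivalent_ft)                    # 14400.0 ft
print(round(equivalent_ft / 5280, 2))   # ~2.73 miles
```

In other words, a 30' disc at 500 miles subtends the same angle as a 2.7-mile feature on the Moon, a bit more than a quarter of the crater Bessel.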
What this tells us is that our little 8" couldn't resolve a 30' disc very well at 500 miles, at least under OUR seeing conditions. If we had the same seeing conditions as Big Sur, and the scope were the 24" with its 2,400" effective f/l, the image would be much larger and clearer. With the objective mirror being three times larger, and the ability to slide the adaptor out (or use a Barlow lens to double or triple the image), Big Sur should have been able to see much more than we could. If we had used our adjustable T-C adaptor and our older GBC-400 camera, we could have pumped Bessel up to 700-800x, and the bright 17-mile crater Menelaus, at 7 o'clock from Bessel and riding on the wall of the huge impact basin, would be a good graphic representation of the "saucer" as Big Sur would have seen it. Of course the Atlas missile was much longer.
1) At least two qualified men saw what they reported: Jacobs and Mansmann.
2) The system used to image the missile was capable of doing just what the men stated.
3) The missile was high enough and the sky was clear enough to facilitate item #2.
4) The missile either moved directly away from the B.U. scope, or moved at right angles to the scope, or some combination of the two. To accommodate the latter, trigonometry would be needed, but the results would be better than the first option, which is a worst-case scenario. However, it is just that scenario that I used to recreate the imaging capability.
What the B.U. scope used to record the images is not known to this researcher, except that it may have been a commercial TV-grade orthicon or, better yet, a special one designed for a special purpose, as suggested by Jacobs when he mentioned a "1,000 line" system. Even if conventional, the facts remain that the system worked. The optical segment of the device was a folded Gregorian telescope with a 24-inch diameter objective mirror and a 240-inch focal length, three times longer than our 8" f/10, with an objective three times larger. A set of Barlow extenders could yield effective focal lengths of from 480 to 2,400 inches! That's one hell of a telescope even if used within the confines of the Earth's polluted atmosphere. If mounted in space it would be a small "Hubble". But at 4,000', and above 80% of the polluted air, the images would be fantastic with a missile at 60 miles up.
Part 3 - The Testimony
I will be using the testimony of three people, the only ones to come forward who really knew anything about the Big Sur incident:
1) RJ, Robert Jacobs, the Officer in Charge of Optical Instrumentation (Ref 1)
2) FM, Florenz Mansmann, the Commanding Officer (Ref 2)
3) KG, Kingston George, the Project Engineer, Operations Analyst for Headquarters, 1st Strategic Aerospace Division
The magnification of the B.U. was truly impressive. The exhaust nozzles and lower third of the Atlas missile literally filled the frame at this distance of over 100 nautical miles. With one tracking mount and one on elevation working completely manually, it was not easy to keep the image centered in the early stages of flight. As the nose cone package approached T + 400 seconds, sufficient angle of view had been established that we were literally locked down with the whole in-flight package centered in the frame. No one on the site was watching the screen by this point.
Nine of these missions would be photographed through a major portion of powered flight by both the B.U. Telescope operating with effective focal lengths ranging from 1200 inches to an average of 720 inches, and with the conventional cameras and shorter lenses of the 1369th's M-45 mount. Jacobs even mentioned a potential of 1,000 lines for the camera which is almost twice as effective as commercial television, maybe even three times, because many used 360 lines in those days. But this factor has not been confirmed.
Since we are dealing with an alleged destruction of a dummy warhead by a UFO, I think it is important to mention at the outset that the Big Sur/B.U. Telescope project was set up to find out why our ballistic missiles were blowing up.
The objective was to collect low-light-level photography of missile launches into the Air Force Western Test Range from Vandenberg Air Force Base, situated a little over 100 miles to the south. The Big Sur angle presents a unique side-look during test launches, and paper studies convinced some of us that photo data from that location could be of significant value. Local telephoto-lens coverage from Vandenberg AFB is often obscured by the prevailing fog, while the special telescope could be placed at 4,000-feet altitude. Nine of eleven launches from Vandenberg were successfully covered during the three-month deployment. (Ref 3)
Florenz Mansmann confirmed many times, and stuck to his story until his death, that Jacobs DID see the films and that he had called him into his office to view them. Mansmann also stated that Kingston George was never in attendance at any of the showings.
The project was remarkably successful. Soon after we returned the borrowed instrument, a long-term plan was started for a permanent site. An up-to-date telescope is operated today in the Big Sur area by the Western Test Range’s successor, the 30th Space Wing of the Air Force Space Command.
The immediate success of the 1964 project led to a serious problem; we not only could see and gather data on the missile anomalies as hoped, but we also were viewing details of warhead separation and decoy deployment that were considered by the Air Force to be highly classified.
Such was the case during an Atlas launch nicknamed "Buzzing Bee" before sunup on September 22, 1964. On the TV screen, we watched the Atlas climb into the sunlight and shed its booster engine section about two minutes after launch. The sustainer engine shut down some two and half minutes after that, all normal for the Atlas, and we could still see the missile tankage against the dark, starry sky. And then, astonishingly, we saw a momentary puff of an exhaust plume, bright enough to "bloom" on the television monitor, and an object separated from the tank -- the reentry vehicle (RV) was released to follow its own trajectory to the target area. This was followed by two smaller puffs that also bloomed on the monitor, and then two groups of three objects became distinct from the sustainer tank and the RV.
Disregard, for a moment, Kingston George's reference to the September 22, 1964 launch and "before sunup", and note his confirmation that from over 100 miles the optical system was imaging the booster engine section about two minutes after launch! And later the sustainer tank and three smaller objects. No large object, coming into the frame or otherwise, is described.
Both statements confirm that it was a side view, more or less, rather than a rear view, and that the seeing conditions were much better than most people can imagine. With this view, it means the tracking of the launch wasn't a 100-mile-to-1,000-mile degrading image, but a panning scan from left to right with ranges from 100 miles to much less than the 1,000-mile maximum. But even with the distorted figures, I intend to show that the system, and the elevation of the camera above 80% of the atmospheric pollution, should have been able to produce more than adequate imaging of the reported object and the much smaller warhead.
......we had never had a direct view of it before. The Eastern Test Range people who operated the B.U. Scope for us had never seen views like this either, mainly because the telescope was situated to look "up the tail" of the launches on the East Coast. Also, images are seriously degraded by the light passing through a great deal more atmosphere than on our 4,000-foot mountain.
What George said they wanted was a side look at all stages of powered flight. This side-look was not possible from anyplace on the base. Because of the tortured California coastline, such a view was possible from one spot. Big Sur.
This graphic is what Kingston George says the monitor image THEY saw looked like. A kinescope of this, implied to have been about 17" diagonally, would blow the image up to several feet on a movie projection screen! Assuming the resolution at 4,000 feet and the scope used, the image above is a crude example of what they would have seen, as far as detail is concerned. Look at images of the stage separations from the Apollo era just a few years later.
The image of the warhead, even if viewed exactly side-on, would be less than six-thousandths of an inch long on the image orthicon face, or between two and three scan lines. We could not resolve an image of the warhead under these conditions; what is detected is the specular reflection of sunlight: as though caught by a mirror.
Kingston George's carefully-worded statement, "six thousandths of an inch long ON THE IMAGE ORTHICON FACE", is not the same as size on the viewing CRT (Cathode Ray Tube).
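For what it's worth, George's "between two and three scan lines" is consistent with his six-thousandths figure only under some assumed active raster height on the tube face. The value below (1.28 inches) is purely hypothetical, not a documented spec of the Big Sur tube; the sketch just shows how the scan-line count scales with the line standard:

```python
image_len_in = 0.006       # Kingston George's stated image length on the tube face
raster_height_in = 1.28    # ASSUMED active raster height; hypothetical value

# How many scan lines a 0.006" image spans at two candidate line counts
lines_covered = {}
for lines in (575, 875):
    pitch = raster_height_in / lines   # height of one scan line on the face
    lines_covered[lines] = image_len_in / pitch
    print(lines, round(lines_covered[lines], 1))  # 575 -> ~2.7, 875 -> ~4.1
```

Under that assumption, a 575-line raster gives the "two and three scan lines" George describes, while an 875-line raster would put the same image across about four lines.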
The arrangement of the electrodes of the Image Orthicon.
The end window envelope at the top is 76 mm in diameter and, excluding the base pins, is 370 mm tall. The neck is 51 mm in diameter and 275 mm long. The head unit is 66 mm long excluding pins. Type 5655 was first introduced in 1949. The sizes here may vary from the one used at Big Sur, but the end window and the orthicon target is 1/3rd of an inch, just like the one we first used. The image Kingston George is referring to was projected on this 1/3rd-inch surface and displayed on the monitor, which was filmed with a motion picture camera.
As you can see, an image projected on the IO tube face translates into considerable magnification. Our HPS unit projects an image of the entire Moon several inches in diameter OVER the CCD chip which is 1/4" wide. That portion of the Moon outlined by the chip fills our monitor giving amazing magnification simulating 600 miles range from the lunar surface at an actual distance of 240,000 miles! The image of the WARHEAD, which was only 2'7.5" to 2'0" in diameter (4' at adaptor) was the subject of the small image, not the object reported by Jacobs and Mansmann.
As anyone who has followed this incident knows, I have had trouble fixing the exact date of the launch. This is not deliberate on my part, but simply a matter of inexact records. The launch in question may have happened on September 22nd. It may be the same one Kingston George describes. But there are discrepancies in his memory and mine. First, the launch we photographed was NOT predawn. It was broad daylight. The radar chaff, a cloud of debris, was part of the package. The object which flew into the frame was a solid craft, saucer shaped, and not a cloud of debris. George may be talking about another incident entirely since, according to my records, the most probable dates were either September 2, 3 or 15, 1964. If the date was September 22nd, then he is not discussing the same portion of the flight which I am.
From a letter to Lee Graham dated 1-30-1983:
The Enquirer story is true except the year was 1964 not 1965. The camera system we used was capable of 'nuts and bolts' focus from a point seventy miles from any object being tracked so the photos were readable.
From a letter to Peter Bons dated March 8, 1983:
.....Telescopic photography of that magnitude makes sizes indeterminable. We knew the missile size but could not compare since we did not know how far from the missile the 'object' was at time of beam release.
......From clarity, action and situation in the film, the assumption was, at that time, extraterrestrial. Details would be sketchy and from memory, the shape was classic disc, the center seemed to be a raised bubble, not sure any ports or slits could be seen but was stationary, or moving slightly- floating over the entire lower saucer shape, which was glowing and 'seemed' to be rotating slowly. At the point of beam release- if it was a beam, it, the object, turned like an object required to be in a position to fire from a platform--- but again, this could be my own assumption from being in aerial combat...
To my satisfaction, whether the reported incident happened or not, the first question has been answered. Several of my colleagues and I have researched the optical system and methods used at Big Sur, and we have all come to the same conclusion. Not only could they "see" what was reported, but that's what the Big Sur weapons test was all about. It worked. Others have claimed they can prove that it did not, but they have not provided their report, let alone the proof. Now, what did Jacobs and Mansmann see?