Library of Professor Richard A. Macksey in Baltimore

Monday, May 7, 2012

911 - Underground Nuclear Power Plants

Underground Nuclear Power Plants
It may not be widely known that several Underground Nuclear Power Plants (UNPPs) have
been operated since the early 1960s.

In Western Europe, 4 small Underground NPPs have been operated at Halden (Norway,
1960s); Agesta (Sweden, 1957); Chooz (France, 1960s); and Lucens (Switzerland, 1962).

Construction of the Swedish Agesta complex started in 1957 and operations started in
1964. The underground reactor was a small CHP system (for a Stockholm district),
producing only 10MW of electricity and some 70MW of thermal energy for district
heating. It is believed it was also used for military plutonium production.

Construction of the Swiss Lucens reactor started in 1962 and it went live in 1966. This was
also a small reactor producing some 8MW of electricity. It was built in an underground
cavern and experienced a core meltdown in 1969.

In the Soviet Union, the AD-2 underground reactor was commissioned in 1964 to provide
combined heat and power (CHP) to the city of Zheleznogorsk. It is still operational and was
also used to produce weapons-grade plutonium. Russia has recently been studying plans
to build more underground NPPs using small "mini" reactors based on naval technology.

The report "Underground Nuclear Power Plant Siting" by The Aerospace Corporation and
California Institute of Technology (1972) analysed different potential underground NPP
configurations and scenarios with a view "toward novel approaches to siting plants within
the State of California". Four potential underground sites on the California coast were listed.
One of the stated advantages of underground siting was the "reduced population-distance
requirements" - i.e. an underground NPP could be situated much closer to major
population centres due to the improved containment.


It was stated that underground NPP construction was feasible because of the European
experience with 4 reactors and existing experience with large underground excavations
for hydroelectric facilities.
"The most apparent advantage for underground power plant siting is improved
containment".

"The separation distance from the plant to population centres might well be reduced from
the 10-20 miles characteristic of comparable surface plants to a small localised area".
In other words, it was seen as quite feasible to locate underground NPPs within city limits.

Several nuclear reactors worldwide are used not just for electrical power generation but
also for district heating - Combined Heat and Power - using the steam generated from the
NPP for heating and air conditioning (via steam chillers). CHP is desirable due to its higher
efficiency and better utilisation of power plant thermal energy.

New York has one of the largest district heating systems in the world. In operation since 1882, the ConEdison system covers much of Manhattan (including 7 WTC) from the southern tip to 75th Street. It would have made sense for the UNPPs under the WTC to have been
integrated into the district steam heating system to improve their efficiency and to
disguise the reactor cooling system.

In his 7th August 2007 testimony to New York City Council, the President of the International District Energy Association spoke of how "district energy recycles and reuses the heat that is produced during generation of electricity. Standard power plants effectively convert only about 33-36 percent of the fuel they burn into electricity. Nearly two-thirds of the fuel used in the electricity production process ends up being rejected or "wasted" up the smokestack, in cooling towers or exhausted to rivers, lakes and oceans. Combined heat and power recycles this waste heat and uses it to heat buildings in a surrounding area through a district energy system. Combined heat and power is most feasible when there is an area near the plant that has a need for the heat – a downtown area..."
Integration of the UNPPs into the ConEdison system would eliminate the problem of
having to discharge cooling water into the Hudson, which would lead to increased river
water temperatures, a possible radiation signature and could lead to discovery of the
NPPs. In addition, if there were or are "Manhattan Project" underground military facilities
being powered by the UNPPs, these would also require heating and cooling. Integration
into the pre-existing district energy system would be the simplest approach.

It can be seen that UNPPs had already been built and operated before construction of the
WTC commenced. The Swedish Agesta reactor could be a blueprint - a small underground
nuclear reactor used for CHP for a major city and for military plutonium production.

Where else would the military place at least some critical nuclear weapons material
facilities at the height of the Cold War other than underground? What other electrical
power generating technology would be used to provide long term power for underground
military infrastructure during and after a nuclear war?

911 - The China Syndrome

The China Syndrome
Pictures of the Reactor Containment Structure
22 September 2008
In what may be the most stunning direct evidence yet revealed for the occurrence of "The
China Syndrome" under the WTC, press articles have appeared in recent days concerning
the discovery of a "pothole" 40 feet deep in the bedrock under the site of the WTC. It was
uncovered during excavations in the South East quadrant of the site for the new "Freedom
Tower".

In August 2008 it was reported (Daily Mail, 25/08/08) that work was going on 20 hours a
day in the "East Bathtub" to prepare the foundations for the Freedom Tower. Concrete was
being poured down a navy blue funnel from the street into the bathtub. However, in the
panoramic view above there is no sign that construction of foundations for the Freedom
Tower has commenced though perhaps this is out of sight to one side. In fact, reports
commented that this "pothole" was exposed as the overlying topsoil was being removed to
reach the bedrock before construction of the Freedom Tower foundations can commence.

According to the press reports the top of the "steel grey" bedrock is 70 feet below ground
level and the "pothole" extends another 40 feet down. The "explanation" for this pothole is
that it was formed thousands of years ago by the geological processes of Ice Age glaciers.

NY Times, 21/09/08: Cheryl Moss, the senior geologist at consulting engineers Mueser Rutledge in charge of the project, is quoted as saying:

"There are areas in local parks that have small vertical potholes exposed but I’m not aware
of anything in the city with a whole, self-contained depression on this scale.”

Shown photographs of the rocks, Sidney Horenstein, a geologist and environmental
educator emeritus at the American Museum of Natural History, said, “You don’t find such
an array of rock types in the few places in the city that the glacial deposits are exposed.”

A closer view of the hole:

If one looks at the left hand wall of the hole, there appears to be a large space between the
central grey, very flat bottom of the hole and the smooth, vertical left hand wall. It appears
that the hole may extend further down in this area.

The central grey area at the bottom of the circular hole looks just like freshly poured
concrete. The colour is identical. Is this not a large concrete plug filling the bottom of the
hole? The metal structure at the bottom edge of the picture looks identical to construction
shuttering or formwork, used to keep poured concrete in place while it sets.

What was the function of the large steel vertical pipe set into the side of the hole? The end
of a smaller steel pipe can be seen just below it.

This photograph is of a construction worker in the hole, showing the morphology of the
side wall.

Is this smooth flowing morphology consistent with resolidification of molten rock?

Remember, we know unequivocally from the aerosol analysis carried out by Prof. Cahill that for over 6 weeks the temperatures under the WTC were so high that soil and glass were being evaporated - boiled away. (The aerosol also indicated the evaporation of stainless steel.) This means the temperature must have been well over 2000 degrees centigrade.
These are the levels of volcanic temperatures encountered during the core meltdown of a
nuclear reactor, directly witnessed at Chernobyl, in which the molten uranium reactor
core, fuelled by the heat of its own radioactive decay, melts its way down through its
stainless steel pressure vessel and then the concrete bioshield. Uranium is extremely
dense and so will fall to the bottom of the molten rock pocket that it creates, further
melting the bedrock and so on. Under the force of gravity the molten mass of uranium will
then continue to melt its way on down through the Earth like a tunneling machine - the
China Syndrome.

This large circular hole in the bedrock is consistent with the space that would have been
required to house the reactor containment structure. The cylindrical stainless steel
pressure vessel holding the core of the reactor would have been of relatively limited
diameter, maybe 5 metres for a small reactor, 10 metres for a large one. These
pressure vessels are housed in a tightly fitting concrete reactor pit, which prevents the
steel vessel from expanding thermally and rupturing. Situated around the cylindrical
reactor core is a much larger reactor containment structure or bioshield, constructed of
one (or more) concentric very thick concrete cylinders. See the designs proposed in 1972
for Underground NPPs in California.

The photograph below shows the interior of a cylindrical reactor containment structure.
The red arrow is pointing at the reactor pit where the core would be installed.


The following photograph of an above ground nuclear power plant shows the exterior of
the reactor containment building very clearly - the cylindrical domed structure
dominating the complex.

It can be seen that in installing nuclear reactors under the Twin Towers, a large cylindrical
hole would have to be excavated out of the bedrock to house the reactor chamber itself. It
would have been argued that the bedrock itself provided an excellent containment
structure, except for the relatively weak concrete roof of course. Further excavations
would be required for the cooling system and electrical generation system plus all the
other auxiliary services. In the foreground (bottom left hand corner) of the photograph of
the hole in the bedrock, we can see that the metal shuttering divides the round hole from a
further deep excavation that leads away to the bottom left. The left hand wall in this area is
very smooth, vertical and the colour of old concrete - it is clearly not a natural formation.
This area may have formed the access passageway to the reactor hall for personnel and
auxiliary systems.

The following photograph of the Japanese Monju power station shows another similar
cylindrical containment building design.

At Chernobyl, a concrete "sarcophagus" was put in place over the wrecked reactor
building. One would expect a similar procedure to have been effected at the WTC. The best
one could do would be to fill in the hole on top of the core once it had burrowed down
sufficiently for protected workers or robots to access the surface. This would certainly be
one reason to pump in concrete 24 hours a day and there is no evidence of ordinary
building foundations.

The reports also state that thousands of smooth cobbles were found around the hole,
coloured red, purple and green. Rhyolite cobbles formed by igneous processes are usually
red-purple in colour.

The NY Times article comments: "Along the east side of the pothole, the rock layers run
vertically (my emphasis)— not horizontally. The result, where the surface has been carved
away in a concave form, is an abstract canvas of swirling, concentric rings; not unlike a
gouge in a wall that reveals many layers of old paint."

On P160 of my report, I state that the excavations for the WTC foundations were officially
27 metres deep (88 feet). The surface of the bedrock itself is 70 feet below the surface. The
47 central steel box columns were set into the bedrock during construction in the early
1970s to anchor the towers. Under the WTC plaza there were seven basement layers which
at a reasonable estimate of 10 feet each reached the 70 feet down to the bedrock. This
known construction is not consistent with the presence of a 20,000-year-old hole some 40
feet deep and over 80 feet wide in the bedrock itself under the towers or plaza - the
foundations or basement levels would have encountered it and been compromised.
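As a quick check on the arithmetic in this paragraph, the sketch below runs the numbers quoted above: seven basement levels at an estimated 10 feet each against the reported 70-foot depth of the bedrock, plus the approximate volume of a hole 80 feet wide and 40 feet deep. The dimensions are the press-reported figures; the cylinder shape is an assumption made purely for estimation.

# Quick check of the depth argument and the scale of the reported hole.
# Depths and widths are the figures quoted in the press reports above.
import math

basement_levels = 7
feet_per_level = 10            # "a reasonable estimate of 10 feet each"
basement_depth_ft = basement_levels * feet_per_level
print(f"Basement depth: {basement_depth_ft} ft (bedrock reported at 70 ft)")

# Approximate the hole as a cylinder 80 ft across and 40 ft deep.
radius_ft = 80 / 2
depth_ft = 40
volume_ft3 = math.pi * radius_ft ** 2 * depth_ft
print(f"Hole volume: ~{volume_ft3:,.0f} cubic feet")   # ~201,000 ft^3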

A report by the Union of Concerned Scientists "US Nuclear Plants in the 21st Century: The
Risk of a Lifetime" was published in 2004. The first subject the report covers is The Bathtub
Curve. This is the well-known engineering reliability curve, which shows a high risk of failure in a system early in its life (infant mortality) and near the end of its life (EOL), with lower risk during the main operating period.
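The curve is conventionally modelled as the sum of three failure processes: a decreasing Weibull hazard (infant mortality), a constant random-failure rate, and an increasing Weibull hazard (wear-out). The sketch below uses arbitrary parameters chosen only to reproduce the shape; they are not taken from the UCS report.

# Bathtub hazard curve as the sum of three Weibull hazards:
# k < 1 gives infant mortality, k = 1 a constant random-failure rate,
# k > 1 wear-out. Parameters are arbitrary, chosen only for shape.

def weibull_hazard(t, k, lam):
    """Hazard rate h(t) = (k/lam) * (t/lam)**(k-1)."""
    return (k / lam) * (t / lam) ** (k - 1)

def bathtub_hazard(t):
    infant = weibull_hazard(t, k=0.5, lam=10.0)    # decreasing early-life risk
    random_rate = 0.01                             # flat mid-life risk
    wearout = weibull_hazard(t, k=5.0, lam=80.0)   # rising end-of-life risk
    return infant + random_rate + wearout

# Hazard is high at first, dips through mid-life, then rises again at EOL.
for years in (1, 5, 20, 40, 60, 80):
    print(f"t = {years:2d} y: hazard = {bathtub_hazard(years):.4f}")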

It is of course also well known that during construction of the WTC, a large excavation
known as The Bathtub was dug to house the basement structures under ground level.







911 - Polls 2011


(09/07/11) -

Americans Expect New Attack Similar to 9/11 in Their Lifetimes

Most respondents agree with the 9/11 Commission and reject the notion that a controlled demolition took place in the World Trade Center.
A majority of Americans believe that a terrorist attack similar in scope and magnitude to 9/11 will take place again on U.S. soil, a new Angus Reid Public Opinion poll has found.
The online survey of a representative sample of 1,787 American adults also shows that respondents are divided on the effectiveness of the military intervention in Afghanistan that was launched by the United States government after the events of 9/11.
9/11
Two thirds of respondents (66%) believe that the commission that investigated the events of Sept. 11, 2001 was right in its conclusion that an attack was carried out by 19 hijackers who were members of the al-Qaeda terrorist organization, led by Osama bin Laden.
Only 12 per cent of respondents openly disagree with the conclusion of the 9/11 Commission, and 22 per cent are undecided.
A small proportion of Americans regard several assertions that have been made about 9/11 as credible, including the notion that United Airlines Flight 93, which crashed in Pennsylvania, was shot down (16%), that the collapse of the World Trade Center was the result of a controlled demolition (14%), and that no airplane actually crashed at the Pentagon on Sept. 11 (11%).
Even fewer respondents believe that Osama bin Laden is alive (9%) and that no airplanes crashed into the World Trade Center on 9/11 (5%).
Terrorism
More than half of Americans (58%) think that an attack similar in scope and magnitude to 9/11 will take place in the United States again in their lifetimes. Republicans (66%) are more likely than Independents (59%) and Democrats (52%) to feel this way.
More than a third of Americans (36%) are “very concerned” or “moderately concerned” about becoming the victim of a terrorist attack, while three-in-five (60%) are “not too concerned” or “not concerned at all.” Democrats (43%) are more worried about this possibility than Republicans (36%) or Independents (33%).
Americans are divided in their assessment of the military intervention that was launched in Afghanistan as a result of 9/11, with 44 per cent considering it a success and 36 per cent deeming it a failure. Republicans are more likely to see the military campaign as a success (48%) than Democrats and Independents (both at 33%).
Analysis
There has been little change in the views of Americans on the 9/11 attacks since the survey conducted by Angus Reid Public Opinion in March 2010 after Iranian President Mahmoud Ahmadinejad claimed that the 9/11 attacks were a “fabrication”. The core group of Americans who question certain elements of the official story—including the conclusions of the 9/11 commission and the notion that “many people” in the U.S government had prior knowledge of the plot—does not reach one-in-six respondents.
As the military intervention in Afghanistan draws to a close, the public is clearly divided. Republicans are more likely to say that the war was a success, while almost half of Democrats and Independents claim it was a failure.
-------------------------------------------



(03/21/10) -

Most Americans Reject 9/11 Conspiracy Theories

(Angus Reid Global Monitor) – Few people in the United States agree with some of the allegations that have been made in relation to the events of 9/11, according to a poll by Angus Reid Public Opinion. Only 15 per cent of respondents think claims that the collapse of the World Trade Center was the result of a controlled demolition are credible.
In addition, 15 per cent think United Airlines Flight 93 was shot down, 13 per cent believe no airplane actually crashed at the Pentagon, and six per cent agree with the claim that no airplanes crashed into the World Trade Center and that the images seen on television were altered.
Al-Qaeda operatives hijacked and crashed four airplanes in the U.S. on Sept. 11, 2001, killing nearly 3,000 people. In July 2004, the federal commission that investigated the events of 9/11 concluded that "none of the measures adopted by the U.S. government from 1998 to 2001 disturbed or even delayed the progress of the al-Qaeda plot" and pointed out government failures of "imagination, policy, capabilities, and management."
In October 2001, U.S. president George W. Bush ordered the invasion of Afghanistan, claiming that there would be "no distinction between the terrorists who committed these acts and those who harbour them." The conflict began in October 2001, after the Taliban regime refused to hand over al-Qaeda leader Osama bin Laden without evidence of his participation in the 9/11 terrorist attacks in New York and Washington.
Earlier this month, U.S. attorney general Eric Holder discussed the fate of bin Laden, saying, "The possibility of capturing him alive is infinitesimal. He will be killed by us, or he will be killed by his own people so that he is not captured by us. We know that. (…) The possibility [of capture] simply does not exist."
Polling Data
Many things have been said and written about the events of 9/11. For each of the following statements, please say whether you deem them credible or not credible.
 
The collapse of the World Trade Center was the result of a controlled demolition: Credible 15%, Not credible 74%, Not sure 11%
United Airlines Flight 93, which crashed in Pennsylvania, was shot down: Credible 15%, Not credible 62%, Not sure 22%
No airplane actually crashed at the Pentagon on Sept. 11: Credible 13%, Not credible 76%, Not sure 11%
No airplanes crashed into the World Trade Center—the images seen on television were altered: Credible 6%, Not credible 87%, Not sure 7%
Source: Angus Reid Public Opinion 
Methodology: Online interviews with 1,007 American adults, conducted on Mar. 9 and Mar. 10, 2010. Margin of error is 3.1 per cent.
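Incidentally, the quoted 3.1 per cent margin of error is just the standard formula for a simple random sample of about 1,000 respondents at 95 per cent confidence; a quick check, assuming the usual worst-case 50/50 proportion:

# Margin of error at 95% confidence for a proportion near 50%,
# the convention most pollsters use when quoting a single figure.
import math

n = 1007                      # sample size from the methodology note
z = 1.96                      # 95% confidence z-score
moe = z * math.sqrt(0.5 * 0.5 / n)
print(f"Margin of error: {moe:.1%}")   # ~3.1%, matching the stated figure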
----------------------------------------------

911 - Cloaking technology - Hologram technology


Cloaking technology. 
(Having an aircraft that is or looks like an American Airlines Boeing 757 fly by the scene for witnesses to see, then cloaking it right before something else hits the Pentagon, after which the cloaked plane flies over the building. Also cloaking the real object that did crash into the Pentagon.)

► "A scientist at Tokyo University has developed a coat which makes those who wear it appear invisible.
"We have a camera behind the person wearing the coat," Mr Tachi told the BBC.
The image from the camera is then projected onto the coat, so that the wearer appears virtually transparent when seen through a viewfinder.
Beforehand "it looks like a grey coat," Mr Tachi said. "But when we project the image onto it we can see a very clear picture of what is projected."
The real purpose of the new technology is not to make a person appear see-through, however, but to augment reality, Mr Tachi said.
"If we paint a wall, then we can see behind it," Mr Tachi said. "Even if there is no window in the room, we can see the scenery outside."
The technology may also be useful for pilots, to make the floors of their cockpits appear transparent for landing." -BBC (02/18/03)

► THE INVISIBLE MAN
"Harry Potter isn't the only academic with an invisibility cloak. A professor at the University of Tokyo has created an optical camouflage system that makes anyone wearing a special reflective material seem to disappear. Here's how: a video camera records the real-life scenery behind the subject, transmits that image to a front-mounted projector, which then displays the scene on the reflective material. The system has obvious military applications and could also be used in airplane cockpits to make landings easier for pilots." -TIME (2003)

► "MiG Plasma Cloaking Device to Take Off Soon
A NEW Russian MiG fighter that uses a "Star Trek"-style plasma cloaking device to hide from enemy radar and missiles is due to make its first flight any day.
The stealth device weighs under 100kg and can be fitted to any aircraft. It surrounds the plane with a cloud of plasma or electrically charged gas, rendering it invisible to enemy radar, say its makers. " -Telegraph (10/06/99) [Reprinted at: Top Secret Projects]

► "Sensor-and-display systems would create illusions of transparency.
Lightweight optoelectronic systems built around advanced image sensors and display panels have been proposed for making selected objects appear nearly transparent and thus effectively invisible. These systems are denoted "adaptive camouflage" because unlike traditional camouflage, they would generate displays that would change in response to changing scenes and lighting conditions." -NASA's Jet Propulsion Laboratory (08/00)

► "Active camouflage (or adaptive camouflage) is a group of camouflage technologies which would allow an object (usually military in nature) to blend into its surroundings by use of panels or coatings capable of changing color or luminosity. Active camouflage can be seen as having the potential to become the perfection of the art of camouflaging things from visual detection.
Theoretically, active camouflage should differ from more conventional means of concealment in two important ways. First, but less importantly, it should replace the appearance of what is being masked with an appearance that is not simply similar to the surroundings (like in conventional camouflage) but with an exact representation of what is behind the masked object. Second, and more importantly, active camouflage should also do so in real time. Ideally active camouflage would not only mimic nearby objects but also distant ones, potentially as far as the horizon, creating perfect visual concealment. In principle, the effect should be similar to looking through a pane of glass, making that which is hidden perfectly invisible.
This technology is poised to develop at a rapid pace, with the development of organic light-emitting diodes (OLEDs) and other technologies which allow for images to be projected from oddly-shaped surfaces. With the addition of a camera, while not allowing an object to be made completely invisible, theoretically the object might project enough of the background to fool the ability of the human eye or other optical sensors to detect a specific location." -Answers.com
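The capture-and-redisplay loop these passages describe can be sketched in a few lines. This is a toy illustration only, assuming OpenCV is installed and a webcam sits at device index 0 looking at the background; a real rig such as Tachi's projects the feed onto retroreflective cloth from the observer's viewpoint rather than showing it in a window.

# Toy optical-camouflage loop: capture the scene behind an object and
# re-display it in front. Assumes OpenCV (pip install opencv-python)
# and a camera at index 0; a real rig would project onto the object.
import cv2

camera = cv2.VideoCapture(0)   # camera looking at the background
if not camera.isOpened():
    raise RuntimeError("No camera found at index 0")

while True:
    ok, background = camera.read()
    if not ok:
        break
    # In a full system this frame would be warped to the observer's
    # viewpoint before display; here it is shown as-is.
    cv2.imshow("cloak surface", background)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

camera.release()
cv2.destroyAllWindows()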

► Now you see it, now you won't: Boeing lifts the veil on stealthy Bird of Prey
"Boeing's Bird of Prey technology demonstrator, unveiled in St Louis on 18 October after spending a decade in the world of 'black' or classified programs, is a very important step in stealth technology, combining a very low radar cross-section (RCS) with a renewed focus on visual and even acoustic signatures. The overall goal, confirmed by officials at the event, is to achieve daylight stealth.
On the record, officials said only that the program's purpose was to test 'specific' and 'breakthrough' stealth technologies, along with the rapid-prototyping techniques developed by the Phantom Works. The pilots who carried out the unusually slow-paced flight test program – 38 missions between late 1996 and 1999, barely more than a sortie per month – were identified, but the engineers who ran the program were not." -Janes.com

► "United States Patent: 5,307,162
Cloaking system using optoelectronically controlled camouflage 
The Cloaking System is designed to operate in the visible light spectrum, utilizes optoelectronics and/or photonic components to conceal an object within it, and employs analog or digital control feedback resulting in camouflage adaptable to a changing background. The system effectively conceals either a still or moving object from view by the interposing of a shield between an observer and the object and recreating a full color synthetic image of the background on the shield for viewing by observer, thus creating the illusion of transparency of both the object and the Cloaking System. This system consists of four major elements: a sensor; a signal processor; a shield; and a means of interconnecting, supporting, and safely enclosing the aforementioned elements along with the concealed object.
Appl. No.: 977192
Filed: November 16, 1992" -United States Patent and Trademark Office

"United States Patent: 6,333,726
Orthogonal projection concealment apparatus
A pixel array-based orthogonal projection concealment apparatus applicable for continuously matching a mobile platform to its changing background integrates power means, sensing and inputting means for observer and background data, programmed computational means, and pixel array display means in a single apparatus. In its preferred embodiment the concealment projection image is displayed through a liquid crystal array.
Appl. No.: 451721
Filed: December 1, 1999" -United States Patent and Trademark Office

► 'United States Patent Application: 20,020,090,131
The invention described herein represents a significant improvement for the concealment of objects and people. Thousands of light receiving segmented pixels and sending segmented pixels are affixed to the surface of the object to be concealed. Each receiving segmented pixel receives colored light from the background of the object. Each receiving segmented pixel has a lens such that the light incident upon it is segmented to form focal points along a focal curve (or plane) according to the light's incident trajectory. In a first embodiment, this incident light is channeled by fiber optics to the side of the object which is opposite to each respective incident light segment. The light which was incident on a first side of the object traveling at a series of respective trajectories is thus redirected and exits on at least one second side of the object according to its original incident trajectory. In a second embodiment, this incident light is segmented according to trajectory, and detected electronically by photo diodes. It is then electronically reproduced on at least one second side of the object by arrayed LEDs. In this manner, incident light is reproduced as exiting light which mimics trajectory, color, and intensity such that an observer can "see through" the object to the background. In both embodiments, this process is repeated many times, in segmented pixel arrays, such that an observer looking at the object from any perspective actually "sees the background" of the object corresponding to the observer's perspective. The object having thus been rendered "invisible" to the observer.
Filed: October 2, 2001" -United States Patent and Trademark Office

► Future Stealth
"Finally, stealth aircraft were limited to nighttime flying as they could be spotted with ease during the day. This leads to the subsequent task in future stealth aircraft development…the creation of a plane invisible to the eye. Lockheed’s legendary ‘Skunk Works’ experimental arm is known to be developing new electro-chromic materials. Their aim is to create camouflage panels which can change color or tint when subjected to an electrical charge. Other engineers, like Boeing and Northrop, are also working on similar stealth technologies.
One of these systems is the "electrochromic polymer" that is being developed at the University of Florida. These thin sheets cover the aircraft’s skin and sense the hue, color and brightness of the surrounding sky and ground. The image received is then projected onto the aircraft’s opposite side. When charged to a certain voltage, these panels undergo color change. Another similar "skin" is being tested at the top-secret Groom Lake facility at Area 51 in Nevada. It is reputed to be composed of an "electro-magnetically conductive polyaniline-based radar-absorbent composite material." The system also utilizes photo-sensitive receptors all over the plane that scan the surrounding area; subsequently the data is interpreted by an onboard computer which outputs it much like a computer screen, making the aircraft virtually invisible to sight." - Stealth; Low Observable Technology

Hologram technology 
(such as a hologram projected over a missile or other aircraft, or maybe one projected flying into the Pentagon while bombs inside blew up).

Brief Description
The holographic projector displays a three-dimensional visual image in a desired location, removed from the display generator. The projector can be used for psychological operations and strategic perception management. It is also useful for optical deception and cloaking, providing a momentary distraction when engaging an unsophisticated adversary.
Capabilities
-Precision projection of 3-D visual images into a selected area
-Supports PSYOP and strategic deception management
-Provides deception and cloaking against optical sensors
-Air Force/Wayback Machine


► When Seeing and Hearing Isn't Believing
"Most Americans were introduced to the tricks of the digital age in the movie Forrest Gump, when the character played by Tom Hanks appeared to shake hands with President Kennedy.
For Hollywood, it is special effects. For covert operators in the U.S. military and intelligence agencies, it is a weapon of the future.
"Once you can take any kind of information and reduce it into ones and zeros, you can do some pretty interesting things," says Daniel T. Kuehl, chairman of the Information Operations department of the National Defense University in Washington, the military's school for information warfare.
Digital morphing — voice, video, and photo — has come of age, available for use in psychological operations. PSYOPS, as the military calls it, seek to exploit human vulnerabilities in enemy governments, militaries and populations to pursue national and battlefield objectives.
To some, PSYOPS is a backwater military discipline of leaflet dropping and radio propaganda. To a growing group of information war technologists, it is the nexus of fantasy and reality. Being able to manufacture convincing audio or video, they say, might be the difference in a successful military operation or coup.
Allah on the Holodeck
Pentagon planners started to discuss digital morphing after Iraq's invasion of Kuwait in 1990. Covert operators kicked around the idea of creating a computer-faked videotape of Saddam Hussein crying or showing other such manly weaknesses, or in some sexually compromising situation. The nascent plan was for the tapes to be flooded into Iraq and the Arab world.
The tape war never proceeded, killed, participants say, by bureaucratic fights over jurisdiction, skepticism over the technology, and concerns raised by Arab coalition partners.
But the "strategic" PSYOPS scheming didn't die. What if the U.S. projected a holographic image of Allah floating over Baghdad urging the Iraqi people and Army to rise up against Saddam, a senior Air Force officer asked in 1990?
According to a military physicist given the task of looking into the hologram idea, the feasibility had been established of projecting large, three-dimensional objects that appeared to float in the air.
The Gulf War hologram story might be dismissed were it not the case that washingtonpost.com has learned that a super secret program was established in 1994 to pursue the very technology for PSYOPS application. The "Holographic Projector" is described in a classified Air Force document as a system to "project information power from space ... for special operations deception missions." -Washington Post (02/01/99)

► "Making Three-Dimensional Holograms Visible From All Sides
A technique for projecting holographic images to make both still and moving three-dimensional displays is undergoing development. Unlike older techniques based on stereoscopy to give the appearance of three-dimensionality, the developmental technique would not involve the use of polarizing goggles, goggles equipped with miniature video cameras, or other visual aids. Unlike in holographic display as practiced until now, visibility of the image would not be restricted to a narrow range of directions about a specified line of sight to a holographic projection plate. Instead, the image would be visible from any side or from the top; that is, from any position with a clear line of sight to the projection apparatus. In other words, the display could be viewed as though it were an ordinary three-dimensional object. The technique has obvious potential value for the entertainment industry, and for military uses like displaying battlefield scenes overlaid on three-dimensional terrain maps." -NASA's Jet Propulsion Laboratory (04/02)

► "Computer-generated characters are common in movies and video games and on the Internet. But imagine walking into a store and seeing a virtual model hovering in front of you, even welcoming you and selling you the latest makeup or clothing styles.
Cameron has been turning heads at Hugo Boss in New York.
He's a digital model projected into free space. Star Wars fans will recall R2-D2 beaming Princess Leia into free space. But Cameron is in a real environment, not on a movie screen.
Cameron's highly realistic three-dimensional presence is completely computer-generated. He's the product of Virtual Characters of New York City.
"We can beam characters into your living room," says Lloyd Nathan, CEO of Virtual Characters.
"We have a series of optics that we've designed that can take a computer-generated image and project it onto a point in space where your eye is trained to focus," says Nathan." -CBS (12/23/00)

► "Holographic Real Image Targets and Countermeasures
This Phase II program resulted in an entirely new process for producing uniform and virtually defect free large Photoresist Holographic Coatings (PHC) for applications ranging from military decoys and countermeasure systems to large scale 2-D and 3-D commercial displays. This process allows for holographic recording and mass-replication of various surface microstructures, and has been a gateway for Physical Optics Corporation (POC) entry into a large display arena.
This technology can produce unique 2-D and 3-D decoys and countermeasures that operate in the spectral range from UV to near IR.
Military decoys, camouflage systems, cockpit displays, head-mounted displays, advanced countermeasures, invisible lidars, range finders, and military optics." -Navy SBIR/STTR Bulletin Board

Making Three-Dimensional Holograms Visible From All Sides                          

Three-dimensional virtual reality displays could be viewed without visual aids.

NASA's Jet Propulsion Laboratory, Pasadena, California

A technique for projecting holographic images to make both still and moving three-dimensional displays is undergoing development. Unlike older techniques based on stereoscopy to give the appearance of three-dimensionality, the developmental technique would not involve the use of polarizing goggles, goggles equipped with miniature video cameras, or other visual aids. Unlike in holographic display as practiced until now, visibility of the image would not be restricted to a narrow range of directions about a specified line of sight to a holographic projection plate. Instead, the image would be visible from any side or from the top; that is, from any position with a clear line of sight to the projection apparatus. In other words, the display could be viewed as though it were an ordinary three-dimensional object. The technique has obvious potential value for the entertainment industry, and for military uses like displaying battlefield scenes overlaid on three-dimensional terrain maps.
An essential element of the technique is the use of a block of silica aerogel as the display medium. Silica aerogel is an open-cell glass foam with a chemical composition similar to that of quartz and a density as low as about one-tenth that of quartz. The sizes of cell features are of the order of 100 Å. Silica aerogel is a suitable display medium because it is nearly completely transparent, with just enough scattering and reflection to enable the generation of a real image.
The figure illustrates a conceptual application in which a three-dimensional topographical map would be displayed by fusing images projected into a block of silica aerogel from four separate holograms. One could use static holograms to project still images, either alone or in combination with computer-generated holograms to project moving or still images. A computer-generated hologram would be downloaded into a large liquid-crystal display, which would be illuminated by a laser projection apparatus to display the holographic image in the aerogel block. For example, the terrain image could be projected from static holograms, while a computer-generated hologram would be used to depict a vehicle moving on the terrain.
This work was done by Frederick Mintz, Tien-Hsin Chao, Peter Tsou, and Nevin Bryant of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Physical Sciences category.
 A Three-Dimensional Topographical Map, projected from holograms for display in a block of aerogel, would be visible from any position above the projection table. One of the holograms could be generated by a computer to depict a vehicle moving on the terrain.


This invention is owned by NASA, and a patent application has been filed. Inquiries concerning nonexclusive or exclusive license for its commercial development should be addressed to the Patent Counsel, NASA Management Office–JPL (818)354-2240. Refer to NPO-20101.
--------------------------------------------------------------------

Computer Image Beams Into Reality

CBS  --  Computer-generated characters are common in movies and video games and on the Internet. But imagine walking into a store and seeing a virtual model hovering in front of you, even welcoming you and selling you the latest makeup or clothing styles.

CBS News Correspondent Russ Mitchell reports on a New York-based technology company bringing virtual characters one step closer to everyday life.

Cameron has been turning heads at Hugo Boss in New York. "Hi there, my name is Cameron. Welcome to our new showrooms," says the virtual image.

He's a digital model projected into free space. Star Wars fans will recall R2-D2 beaming Princess Leia into free space. But Cameron is in a real environment, not on a movie screen.

Cameron's highly realistic three-dimensional presence is completely computer-generated. He's the product of Virtual Characters of New York City.

"We can beam characters into your living room," says Lloyd Nathan, CEO of Virtual Characters. "We can have a character greet you when you come through the door."

"We have a series of optics that we've designed that can take a computer-generated image and project it onto a point in space where your eye is trained to focus," says Nathan. "What we found is that consumers see this image, and they immediately want to walk up and put a hand through it and see what this is."

Retailers and advertisers, always on the prowl for the new, new thing, are flocking to see virtual characters on display. "We have major cosmetic firms, major fashion firms coming in and saying we want to present our cosmetics, for example, to a consumer in an original attention-grabbing way," Nathan says.

Columbia Business School marketing professor Bernd Schmitt says novelty enables companies to break through the clutter of today's mass messages: "Customers are increasingly interested in having experiences in the store in addition to just buying the product, and this new approach fits right into that experiential strategy."

But will an eye-catching virtual model compel a shopper to buy?
Says Schmitt, "I'm not sure that the customer will really fully identify with that person because it is not a real person; it is a virtual character. But at the same time it will link that character to the brand and thereby build the brand image."

Imagination is the only limit at Virtual Characters. The company plans to tap the location-based entertainment market, and information kiosks are another target. "I could do a computer-generated Russ Mitchell that I could have hovering in free space," Nathan quips. "I could then control what you say effectively."
----------------------------------------------

Adaptive Camouflage

Sensor-and-display systems would create illusions of transparency.

Lightweight optoelectronic systems built around advanced image sensors and display panels have been proposed for making selected objects appear nearly transparent and thus effectively invisible. These systems are denoted "adaptive camouflage" because unlike traditional camouflage, they would generate displays that would change in response to changing scenes and lighting conditions.
-----------------------------------------------------------

Active camouflage

Active camouflage or adaptive camouflage is camouflage that adapts, often rapidly, to the surroundings of an object such as an animal or military vehicle. In theory, active camouflage could provide perfect concealment from visual detection.[1]
Active camouflage is used in several groups of animals, including reptiles on land, and cephalopod molluscs and flatfish in the sea. Animals achieve active camouflage both by color change and (among marine animals) by counterillumination.
In military usage, active camouflage remains at the research stage. Counterillumination camouflage was first investigated during the Second World War for marine use. Current research aims to achieve crypsis by using cameras to sense the visible background, and by controlling panels or coatings that can vary their appearance.
Definition

Active camouflage provides concealment in two ways:[2]

  • by making an object not merely generally similar to its surroundings, but effectively invisible through accurate mimicry, and
  • by changing the appearance of the object as changes occur in its background.
Active camouflage has its origins in the diffused lighting camouflage first tested on Canadian Navy corvettes including HMCS Rimouski during World War II, and later in the armed forces of the United Kingdom and the United States of America.[3]

In research


Illustrating the concept: active image capture and re-display creates an "illusory transparency", also known as "computer mediated reality" or "optical camouflage"
Current systems began with a United States Air Force program which placed low-intensity blue lights on aircraft as counterillumination camouflage. As night skies are not pitch black, a 100 percent black-colored aircraft might be rendered visible. By emitting a small amount of blue light, the aircraft blends more effectively into the night sky.
Active camouflage may now develop using organic light-emitting diodes (OLEDs) and other technologies which allow for images to be projected onto irregularly-shaped surfaces. Using visual data from a camera, an object could perhaps be camouflaged well enough to avoid detection by the human eye and optical sensors when stationary. Camouflage is weakened by motion, but active camouflage could still make moving targets more difficult to hit. However, active camouflage works best in one direction at a time, requiring knowledge of the relative positions of the observer and the concealed object.[4]
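The "one direction at a time" limitation follows from simple ray geometry: each point on the cloaked surface must display whatever background point lies on the line from the observer through that surface point, so the observer's position has to be known. A minimal sketch of that lookup, with made-up coordinates:

# For each point P on the cloaked surface, display the background point
# where the observer->P ray meets a background plane at z = z_bg.
# Coordinates are illustrative; a real system needs the observer's
# actual position, which is why the effect works for one viewpoint only.

def background_lookup(observer, surface_point, z_bg):
    """Intersect the ray from observer through surface_point with z = z_bg."""
    ox, oy, oz = observer
    px, py, pz = surface_point
    t = (z_bg - oz) / (pz - oz)          # ray parameter at the background plane
    return (ox + t * (px - ox), oy + t * (py - oy), z_bg)

observer = (0.0, 1.7, 0.0)               # e.g. an eye 1.7 m above the ground
panel_point = (0.5, 1.6, 5.0)            # a point on the cloaked surface
print(background_lookup(observer, panel_point, z_bg=20.0))   # (2.0, 1.3, 20.0)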
Active camouflage technology exists only in theory and proof-of-concept prototypes. In 2003 researchers at the University of Tokyo under Susumu Tachi created a prototype active camouflage system in which a video camera images the background and displays it on a cloth using an external projector.[5]
Phased array optics (PAO) would implement active camouflage, not by producing a two-dimensional image of background scenery on an object, but by computational holography to produce a three-dimensional hologram of background scenery on an object to be concealed. Unlike a two-dimensional image, the holographic image would appear to be the actual scenery behind the object independent of viewer distance or view angle.[6]
In 2011, BAE Systems announced their 'Adaptiv' infrared camouflage technology. It uses about 1000 hexagonal panels to cover the sides of a tank. The panels are rapidly heated and cooled to match either the temperature of the vehicle's surroundings, or one of the objects in the thermal cloaking system's "library" such as a truck, car or large rock.[7]

In animals


The flounder Bothus ocellatus can change its color to match its background in a few seconds
Active camouflage is present in several groups of animals including cephalopod molluscs, fish, and reptiles.
There are two mechanisms of active camouflage in animals: counterillumination camouflage, and color change.
Counterillumination camouflage is the production of light to blend in against a lit background. In the sea, light comes down from the surface, so when marine animals are seen from below, they appear darker than the background. Some species of cephalopod, such as the Midwater Squid and the Sparkling Enope Squid, produce light in photophores on their undersides to match the background.[8] Bioluminescence is common among marine animals, so counterillumination camouflage may be widespread, though light has other functions, including attracting prey and signalling.
Color change permits camouflage against different backgrounds. Many cephalopods including octopus, cuttlefish, and squid, and some terrestrial reptiles including chameleons and anoles can rapidly change color and pattern, though the major reasons for this include signalling, not only camouflage.[9][10]
Active camouflage is also used by many bottom-living flatfish such as plaice, sole, and flounder that actively copy the patterns and colors of the seafloor below them.[11] For example, the tropical flounder Bothus ocellatus can match its pattern to "a wide range of background textures" in 2–8 seconds.[12]

References

  1. ^ Kent W. McKee and David W. Tack (2007). Active Camouflage For Infantry Headwear Applications. HumanSystems. p. iii.
  2. ^ Kent W. McKee and David W. Tack (2007). Active Camouflage For Infantry Headwear Applications. HumanSystems. p. 1.
  3. ^ "Diffused Lighting and its use in the Chaleur Bay". Naval Museum of Quebec, Royal Canadian Navy. Retrieved January 19, 2012.
  4. ^ Kent W. McKee and David W. Tack (2007). Active Camouflage For Infantry Headwear Applications. HumanSystems. pp. 10–11.
  5. ^ "Invisibility". Time magazine.
  6. ^ Wowk B (1996). "Phased Array Optics". In BC Crandall. Molecular Speculations on Global Abundance. MIT Press. pp. 147–160. ISBN 0-262-03237-6. Archived from the original on 27 February 2007. Retrieved 2007-02-18.
  7. ^ "Tanks test infrared invisibility cloak". BBC News Technology. 5 September 2011. Retrieved March 27, 2012.
  8. ^ "Midwater Squid, Abralia veranyi (with photograph)". Smithsonian National Museum of Natural History. Retrieved November 28, 2011.
  9. ^ Forbes, Peter (2009). Dazzled and Deceived: Mimicry and Camouflage. Yale.
  10. ^ Wallin, Margareta (2002). "Nature's Palette: How animals, including humans, produce colours". Bioscience-explained.org. Vol 1, No 2, pages 1–12. Retrieved November 17, 2011.
  11. ^ Sumner, Francis B. (May 1911). "The adjustment of flatfishes to various backgrounds: A study of adaptive color change". Journal of Experimental Zoology 10 (4): 409–506. doi:10.1002/jez.1400100405.
  12. ^ Ramachandran, V.S., C. W. Tyler, R. L. Gregory, D. Rogers-Ramachandran, S. Duensing, C. Pillsbury & C. Ramachandran (29 February 1996). "Rapid adaptive camouflage in tropical flounders". Nature 379: 815–818. doi:10.1038/379815a0. Retrieved January 20, 2012.


----------------------------------------------
Multi-perspective background simulation cloaking process and apparatus 


Abstract
The invention described herein represents a significant improvement for the concealment of objects and people. Thousands of light receiving segmented pixels and sending segmented pixels are affixed to the surface of the object to be concealed. Each receiving segmented pixel receives colored light from the background of the object. Each receiving segmented pixel has a lens such that the light incident upon it is segmented to form focal points along a focal curve (or plane) according to the light's incident trajectory. In a first embodiment, this incident light is channeled by fiber optics to the side of the object which is opposite to each respective incident light segment. The light which was incident on a first side of the object traveling at a series of respective trajectories is thus redirected and exits on at least one second side of the object according to its original incident trajectory. In a second embodiment, this incident light is segmented according to trajectory, and detected electronically by photo diodes. It is then electronically reproduced on at least one second side of the object by arrayed LEDs. In this manner, incident light is reproduced as exiting light which mimics trajectory, color, and intensity such that an observer can "see through" the object to the background. In both embodiments, this process is repeated many times, in segmented pixel arrays, such that an observer looking at the object from any perspective actually "sees the background" of the object corresponding to the observer's perspective. The object having thus been rendered "invisible" to the observer.

Inventors: Alden, Ray M. (Raleigh, NC)
Correspondence Address:
    Ray M. Alden
    808 Lake Brandon Trail
    Raleigh, NC 27610
    US
Serial No.: 970368
Series Code: 09
Filed: October 2, 2001

Current U.S. Class: 382/154
Class at Publication: 382/154
International Class: G06K 009/00



Claims



I claim:

1. A means for receiving a light beam on a first side of an object and for generating a corresponding light beam on a second side of said object, wherein said corresponding light beam is intended to resemble the received light beam in trajectory, color and intensity.

2. An array of lenses for receiving light from at least two trajectories and a second array of lenses for emitting light in at least two trajectories; wherein the receiving light trajectories are equivalent to the emitting light trajectories.

3. A means for receiving a light beam on a first side of an object at a first trajectory and for channeling it to a second side of said object, where it is released at the same said trajectory.


Description



CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application is a Continuation-In-Part of application Ser. No. 09/757,053 filed Jan. 8, 2001.

BACKGROUND FIELD OF INVENTION

[0002] The concept of rendering objects invisible has long been contemplated in science fiction. Works such as Star Trek and The Invisible Man include means to render objects or people invisible. The actual achievement of making objects disappear however has heretofore been limited to fooling the human eye with "magic" tricks and camouflage. The latter often involves coloring the surface of an object such as a military vehicle with colors and patterns which make it blend in with its surrounding.

[0003] The process of collecting pictorial information in the form of two dimensional pixels and replaying it on monitors has been brought to a very fine art over the past one hundred years. Prior cloaking devices utilize two dimensional pixels presented on a two dimensional screen. The devices do a poor job of enabling an observer to "see through" the hidden object and are not adequately portable for field deployment.

[0004] More recently, three dimensional pictorial "bubbles" have been created using optics and computer software to enable users to "virtually travel" from within a virtual bubble. The user interface for these virtual bubbles is nearly always presented on a two dimensional screen, with the user navigating to different views on the screen. When presented in a three dimensional user interface, the user is on the inside of these bubbles. These bubbles are not intended for use as nor are they suitable for cloaking an object.

[0005] The present invention creates a three dimensional virtual image bubble on the surface of an actual three dimensional object. By contrast, observers are on the outside of this three dimensional bubble. This three dimensional bubble renders the object invisible to observers who can only "see through" the object and observe the object's background. The present invention can make military and police vehicles and operatives invisible against their background from nearly any viewing perspective.

BACKGROUND DESCRIPTION OF PRIOR INVENTION

[0006] The concept of rendering objects invisible has long been contemplated in science fiction. Works such as Star Trek and The Invisible Man include means to render objects or people invisible. Prior Art illustrates the active camouflage approach used in U.S. Pat. No. 5,220,631. This approach is also described in "JPL New Technology report NPO-20706" August 2000. It uses an image recording camera on the first side of an object and an image display screen on the second (opposite) side of the object. This approach is adequate to cloak an object from one known observation point but is inadequate to cloak an object from multiple observation points simultaneously. In an effort to improve upon this, the prior art of U.S. Pat. No. 5,307,162 uses a curved image display screen to send an image of the cloaked object's background and multiple image recording cameras to receive the background image. All of the prior art uses one or more cameras which record two dimensional pixels which are then displayed on screens which are themselves two dimensional. These prior art systems are inadequate to render objects invisible from multiple observation points. Moreover, they are too cumbersome for practical deployment in the field.

[0007] The process of collecting pictorial information in the form of two dimensional pixels and replaying it on monitors has been brought to a very fine art over the past one hundred years. More recently, three dimensional pictorial "bubbles" have been created using optics and computer software to enable users to "virtually travel" from within a virtual bubble. The user interface for these virtual bubbles is nearly always presented on a two dimensional screen, with the user navigating to different views on the screen. When presented in a three dimensional user interface, the user is on the inside of the bubble with the image on the inside of the bubble's surface.

[0008] The present invention creates a three dimensional virtual image bubble on the outside surface of an actual three dimensional object. By contrast, observers are on the outside of this three dimensional bubble. This three dimensional bubble renders the object within the bubble invisible to observers who can only "see through the object" and observe the object's background. The present invention can make military and police vehicles and operatives invisible against their background from nearly any viewing perspective.

BRIEF SUMMARY

[0009] The invention described herein represents a significant improvement for the concealment of objects and people. Thousands of directionally segmented light receiving pixels and directionally segmented light sending pixels are affixed to the surface of the object to be concealed. Each receiving pixel segment receives colored light from one point of the background of the object. Each receiving pixel segment is positioned such that the trajectory of the light striking it is known.

[0010] In a first, fiber optic embodiment, the light striking each receiving pixel segment is collected and channeled via fiber optic to a corresponding sending pixel segment. The sending pixel segment's position corresponds to the known trajectory of the light striking the receiving pixel surface. In this manner, light which was received on one side of the object is then sent on the same trajectory out a second side of the object. This process is repeated many times such that an observer looking at the object from nearly any perspective actually sees the background of the object corresponding to the observer's perspective. The object is thus rendered "invisible" to the observer.

[0011] In a second, electronic embodiment, information describing the color and intensity of the light striking each receiving pixel segment (photodiode) is collected and sent to a corresponding sending pixel segment (LED). The sending pixel segment's position corresponds to the known trajectory of the light striking the receiving pixel surface. Light of the same color and intensity which was received on one side of the object is then sent on the same trajectory out a second side of the object. This process is repeated many times such that an observer looking at the object from nearly any perspective actually sees the background of the object corresponding to the observer's perspective. The object is thus rendered "invisible" to the observer.
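
The pairing described in [0010] and [0011] can be sketched minimally in Python. This is an illustration only, not an implementation disclosed in the specification; the names SegmentId, segment_map, and relay_frame are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SegmentId:
        """One directional segment within one pixel cell on the asset's skin."""
        pixel: int    # which pixel cell on the surface
        segment: int  # which directional segment within that cell

    # Fixed one-to-one map from each receiving segment to the sending segment
    # that lies on the same light trajectory (established by the mapping
    # process of FIGS. 9a and 9b).
    segment_map: dict[SegmentId, SegmentId] = {}

    def relay_frame(readings: dict[SegmentId, tuple[float, float, float]]):
        """Electronic embodiment: for each photodiode reading (R, G, B),
        command the paired LED so the light re-emerges on its original
        trajectory on the far side of the asset."""
        led_commands = {}
        for receiver, rgb in readings.items():
            sender = segment_map.get(receiver)
            if sender is not None:
                led_commands[sender] = rgb
        return led_commands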

Objects and Advantages

[0012] Accordingly, several objects and advantages of the present invention are apparent. It is an object of the present invention to create a three dimensional virtual image bubble surrounding or on the surface of objects and people. Observers looking at this three dimensional bubble from any viewing perspective are able to see only the background of the object through the bubble. This makes military vehicles and operatives more difficult to detect and may save lives in many instances. Likewise, police operatives operating within a bubble can be made difficult to detect by criminal suspects. The apparatus is designed to be rugged, reliable, and lightweight.

[0013] The electronic embodiment can alternatively be used as a recording means and a three dimensional display means. The present invention provides a novel means to record visual information and to play back visual information in a three dimensional manner which enables the viewer of the recording to see a different perspective of the recorded light as he moves around the display surfaces while viewing the recorded image.

[0014] Further objects and advantages will become apparent from the enclosed figures and specifications.

DRAWING FIGURES

[0015] FIG. 1 prior art illustrates the shortcomings of prior art of U.S. Pat. No. 5,220,631 and of U.S. Pat. No. 5,307,162.

[0016] FIG. 2 prior art further illustrates the shortcomings of prior art.

[0017] FIG. 2a prior art is a first observer's perspective of the FIG. 2 objects.

[0018] FIG. 2b prior art is a second observer's perspective of the FIG. 2 objects.

[0019] FIG. 3 shows the novel effect of the present invention rendering an object (asset) invisible from nearly any viewing perspective.

[0020] FIG. 4 is a side view of one segmented pixel of the fiber optic (first) embodiment.

[0021] FIG. 5 is a side view of one segmented pixel of the electronic (second) embodiment.

[0022] FIG. 6 illustrates the one to one light receiving and sending relationship of a fiber optic pixel.

[0023] FIG. 7 illustrates the many trajectory one to one light receiving and sending relationship of a fiber optic pixel.

[0024] FIG. 8 illustrates the many trajectory one to one light receiving and sending relationship of an electronic pixel array.

[0025] FIG. 9a shows a pixel mapping process where a first light trajectory is mapped from a pixel "M" segment to a pixel "N" segment.

[0026] FIG. 9b shows the pixel mapping process of FIG. 9a where a second light trajectory is mapped from a pixel "M" segment to a pixel "O" segment.

[0027] FIG. 10 illustrates that one pixel cell has segments that correspond to pixel cell segments on multiple sides of the cloaked object.

DETAILED DESCRIPTION OF THE INVENTION

[0028] FIG. 1, prior art, illustrates the shortcomings of the prior art of U.S. Pat. No. 5,220,631 and of U.S. Pat. No. 5,307,162. The top half of FIG. 1 illustrates the active camouflage approach used in U.S. Pat. No. 5,220,631. This approach is also described in "JPL New Technology report NPO-20706", August 2000. Asset 1 34 has a screen or image sender 37 on one side of it. An image receiver 35 on the opposite side of Asset 1 captures an image of the background which is then presented on the image sender. Background point X 32 is represented on the screen as X' 36. Note that for an observer at point S 31 this scheme does present a reasonable cloaking apparatus, because background points such as X line up with their screen representations such as X'. Unfortunately, for observation positions located anywhere other than S, the image sender presents an image that does not correspond with the background. An observer at point T 33, for example, can see Asset 1 and can also see background point X and background representation point X'. The Asset is only cloaked from a narrow range of viewing positions. Additionally, when Asset 1 needs to be repositioned, it would be very cumbersome to concurrently reposition the image sender display screen. Obviously this two dimensional display screen approach in the prior art has significant shortcomings as field deployable active camouflage.

[0029] The bottom half of FIG. 1, prior art, illustrates the art of U.S. Pat. No. 5,307,162. Here the curved image sender display screen 47 together with multiple image receiving cameras 43 are used to overcome the shortcomings of the above discussed flat screen approach. An observer at point U 39 does see a reasonable representation of the background behind Asset 2 44. The observer at point V 49, however, actually sees two representations of point Y 41, at Y' 45 and Y'' 51. When considering deployment theaters where surroundings are distinctive, such as buildings in urban areas, and especially where the enemy has familiarity with the locations of background structures, such easily detected problems with the existing active camouflage schemes are not acceptable. Additionally, when Asset 2 needs to be repositioned, it would be very cumbersome to concurrently reposition the image sender display screen. Moreover, in today's complex theater conditions it is often not possible to predetermine from which viewing perspective an enemy will be seeing our asset; indeed, the enemy may be on all sides of the asset. In essence, this is still a two dimensional representation presented on a curved two dimensional display screen.

[0030] FIG. 2, prior art, further illustrates the shortcomings of the prior art described in FIG. 1. FIG. 2 depicts a very simple cloaking scenario, that of cloaking a Ship 63 against a Horizon 65. A display screen 61 is deployed between two observers at points P 67 and Q 69. The Screen duplicates the image of the Horizon behind the Ship. FIG. 2a, prior art, is a first observer's (P) perspective of the FIG. 2 objects. This scheme works well from the P observation point: as depicted in FIG. 2a, P's View is that of an uninterrupted Horizon 65a relative to the display screen 61a. FIG. 2b, prior art, is a second observer's (Q) perspective of the FIG. 2 objects. Q can be either at a lower elevation or at a greater distance than is P. In either case, Q's View as illustrated in FIG. 2b shows a significant distortion in the positioning of the Horizon 65b relative to the display screen 61b. The FIG. 2 sequence underscores the problem with prior art attempts to cloak even against quite simple backgrounds.

[0031] FIG. 3 shows an ideal cloaking system that is achievable by the present art. The novel effect of the present invention is that of rendering an object invisible from nearly any viewing perspective. The top section of FIG. 3 illustrates what the present technology (referred to herein as 3D Pixel Skin) can achieve. Background object E 71 can be observed at the correct light trajectory by an observer as he moves past the cloaked object along an observer path 75. By receiving background light from point E at a large number of points on the asset 3 73, and replicating the background point E at a large number of points located on the surface of asset 3, the cloak accurately simulates how a background is perceived by any observer in any position and effectively renders the asset 3 invisible to an observer even as the observer moves around relative to the asset and in close proximity to the asset. Light reflected off of 71 is collected by light collectors on the asset which separate it according to its incident trajectory. A first trajectory 77 is collected on one side of the asset; it is then channeled by fiber optics to exit (or in an alternate embodiment electronically reproduced to exit) from a point on the asset corresponding to (directly in line with) its original trajectory as exiting light 79. This process is repeated many times such that light from 71 (and all other background points in all directions) is collected on one side of the asset and then exits on the other side of the asset. Thus all of the background points can be "seen through" the asset, rendering the asset invisible. As will be further discussed later, the 3D pixel skin consists of preformed rigid panels that are affixed to the surface of the asset and connected to one another such that each light receiving pixel segment (later defined) is communicating with a corresponding light sending pixel segment (later defined) and wherein corresponding segments are along the same light trajectories such as 77 and 79.

[0032] The bottom section of FIG. 3 further illustrates that the 3D Pixel Skin Cloaked Asset 87 is invisible to any observer at any observation point due to light receipt and transmittance (or light simulation in the electronic embodiment) from a vast number of trajectories. Observation points F 81 and G 89 are examples of two such observation points that both simultaneously see light trajectories and colors from all background objects with the correct light trajectories and orientations. A first light trajectory 85 is collected at the surface of 87; said light is diverted (or recorded in the electronic embodiment) such that it exits on its original trajectory as exiting light 83. Note that the observer can see all of the light trajectories coming from all of the background points as though the 87 wasn't there. Simultaneously, 89 also sees all of the background points as if the 87 wasn't there. For example, light 91 from a sample background point is received and diverted (or electronically reproduced) as asset 4 exiting light 93 such that the 89 observer can "see through" 87 and observe 91. As will be later described, collecting light from many different trajectories at many different points on all sides of an asset and then diverting that light in a fiber optic embodiment (or reproducing it in an electronic embodiment) such that light exits the asset on identical trajectories, at identical intensities, and with identical colors (essentially equivalent) to the light that is incident upon the surface of the asset, renders the asset "invisible" from nearly any observation point.
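
The geometric requirement of FIG. 3, that exiting light be collinear with incident light, can be sketched numerically. The following Python fragment is a rough illustration only, assuming the asset is approximated as a sphere; the function name is hypothetical. It finds the surface point from which a given incident ray must exit.

    import numpy as np

    def exit_point_on_sphere(entry, direction, center, radius):
        """Given a ray entering a spherical asset at `entry` (a point on the
        sphere) with direction vector `direction`, return the point where the
        same straight-line trajectory leaves the sphere. The sending pixel
        segment must sit at (or be mapped to) this point."""
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        oc = np.asarray(entry, float) - np.asarray(center, float)
        b = 2.0 * np.dot(d, oc)          # from |entry + t*d - center|^2 = r^2
        c = np.dot(oc, oc) - radius**2   # ~0 when entry lies on the sphere
        t = (-b + np.sqrt(max(b * b - 4.0 * c, 0.0))) / 2.0
        return np.asarray(entry, float) + t * d

For example, a ray entering the unit sphere at (-1, 0, 0) heading along (1, 0, 0) exits at (1, 0, 0), directly opposite, which is where its sending segment belongs.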

[0033] FIG. 4 is a cut-away side view of one segmented pixel of the fiber optic (first) embodiment. The pixel in FIG. 4 both receives light from and sends light to multiple directions simultaneously, though for simplicity the arrows show light going only into the pixel. A primary optic 103 causes received light from different directions (trajectories) to form respective focal points along a focal curve (or plane). Received trajectory 107 represents light of one such trajectory (or from one background point). The 107 is focused by 103 and exits as focusing light 109 traveling toward a focal curve (or plane). The focal curve is divided into segments such as first focal collecting segment 111; each focal segment receives light from a different origination trajectory or background point. Each of these segments feeds the light it collects into a respective fiber optic such as first fiber optic relay 113. The fiber optic is welded along the focal curve such that the 109 is injected efficiently into the 113. All of the other fibers (possibly hundreds) are likewise welded such that the focal curve collecting apparatus is a rigid structure. This rigid structure, as described later, is rigidly connected to the 103 such that the components shown in FIG. 4 are all rigidly connected together. Note that each pixel has an array of fiber optics each of which collects light from a single focal point, wherein each focal point contains light from a common trajectory (or origination point). Similarly, a second light trajectory 101 is focused by 103 to be injected into a fiber optic 117 which resides in a focal curve segment 115. Many such fibers receive light from many such light trajectories, all the light trajectories having been divided into focal points for injection into the respective fibers. It should be noted, as is made clear later, that light also simultaneously travels out of the fibers and 103 in the exact opposite directions. (This can be visualized by reversing the directions of all of the arrows on the depicted light.) The segmented focal curve collector can be manufactured as a one piece bowl shaped transparent plastic structure to which fiber optics can be affixed by a welding or gluing process.
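
As a rough sketch of how a pixel cell of FIG. 4 divides incoming light among its focal curve segments, assuming a thin-lens model (the specification does not prescribe one) and a hypothetical segment_index function:

    import math

    def segment_index(angle_rad, focal_length, segment_pitch, n_segments):
        """Map an incident trajectory angle (relative to the pixel's axis)
        to the focal-curve segment, and hence the fiber, that receives it.
        Thin-lens approximation: lateral focal displacement ~ f * tan(angle).
        Returns None when the trajectory falls outside the field of view."""
        x = focal_length * math.tan(angle_rad)
        idx = int(round(x / segment_pitch)) + n_segments // 2
        return idx if 0 <= idx < n_segments else None

Light arriving straight down the pixel's axis (angle 0) thus lands on the center segment, and each successive segment collects a slightly steeper trajectory.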

[0034] FIG. 5 is a side view of one segmented pixel of the electronic (second) embodiment. FIG. 5 illustrates an electrooptic sender and receiver of light from a range of trajectories. A primary optic 123 causes light from each respective trajectory (or background point) to form a respective focal point along a focal curve (or plane). Only two incoming trajectories are shown, but in practice many trajectories of light enter the primary optic and form focal points along the focal curve (or plane). Positioned on the focal curve is a segmented array of photodiodes and LEDs: 127 is one photodiode which collects light from one focal point, and 131 is one such LED that sends light (not shown) from a given focal point to the primary optic. Wires such as receiving wire 129 carry the electronic signal describing received light to a CPU (not shown), and wires such as sending wire 132 carry the energy from a CPU and driver circuit to power a respective LED to send light (not shown). The segmented electronic pixel receives light from many trajectories (background points) and sends light to many trajectories (to simulate light received from other pixels, as later described). The focal curve (or plane) is manufactured identically to that of FIG. 4 except that LEDs such as 131 and photodiodes such as 127 are embedded along the focal curve to send and receive light respectively. All of the components described in FIG. 5 are connected to form one rigid pixel cell which itself is part of a large panel of similar pixel cells.

[0035] FIG. 6 illustrates the one to one light receiving and sending relationship of a fiber optic pixel segment. FIG. 6 illustrates some pixels similar to those of FIG. 4 (or alternately FIG. 5). Light traveling in a first trajectory 155 passes through a primary optic 151 where it is caused to form a focal point along a focal curve 153. Located on the focal curve is a fiber optic 157 which collects the focused light and carries it to mapping center 159. The map of where the 155 light should be directed (such that it exits on the same trajectory at which it was incident) has been pre-established in a mapping process as discussed later. The mapping center redirects the light to a corresponding second fiber 161. The 161 fiber delivers the light to the focal curve of a corresponding pixel cell 163, from which the light diverges until it reaches a corresponding second primary lens 165 which sends the light on a desired trajectory 167. Note that the 167 trajectory corresponds to (is the same as) the path that the 155 light would have traveled had it not encountered the cloaked asset. An observer therefore sees the 155 light just as he would have had the cloaked object not been there. In a rigid structure, light traveling to the 151 pixel from the 155 relative trajectory will always emerge from the 165 pixel at the 167 trajectory. All of the light arrows can be reversed, and in practice light is always traveling in both directions. The same pixel combination also cooperates in reverse, with light entering at the opposite trajectory at 167 being redirected to exit in the opposite direction at 155. In a fixed map (rigid system), the 157 and 161 will always carry light of identical trajectories in both directions simultaneously. In practice a cloaked object is covered by many such segmented pixel cells, each dividing light into many distinct incident and exiting trajectories. This causes an observer to "see through" the asset to the background behind the asset. It should be noted that sheets of segmented pixel skin consist of the focal plane receiving apparatus 168, a rigid connecting structure 169, and the primary optic 170. To the sheets are attached the hundreds or thousands of individual fibers (or in the alternate embodiment LEDs and photodiodes). These sheets are rigid and can be mounted on the surface of any asset. Each sheet is plugged either into other sheets or into a centralized mapping center where inter-pixel segment communication is arranged, such as at 159.
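
Because each fiber pair of FIG. 6 carries light in both directions, the fixed map is symmetric. A minimal Python sketch (the pair_segments and counterpart helpers are hypothetical):

    # (pixel, segment) identifiers for the two ends of one light trajectory.
    pair_map: dict[tuple[int, int], tuple[int, int]] = {}

    def pair_segments(a, b):
        """Record that segments a and b lie on the same trajectory; the
        fiber pairing of FIG. 6 works identically in both directions."""
        pair_map[a] = b
        pair_map[b] = a

    def counterpart(seg):
        """Where light entering at `seg` must exit, and vice versa."""
        return pair_map.get(seg)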

[0036] FIG. 7 illustrates the many to one light receiving and sending relationship of a segmented fiber optic pixel (a pixel receives light from many directions, each of which is segmented and sent to a respective segment of one of many pixels). FIG. 7 illustrates some pixel cells operating cooperatively with light from multiple trajectories. Light from a first trajectory 171, light from a second trajectory 173, and light from a third trajectory 175 each enter a primary optic. Each light trajectory is caused to form a respective focal point along a focal curve 177. At the focal curve, an array of fiber optics each respectively collects light from one original trajectory. A fiber optic bundle 179 carries the light to a fiber optic mapping center 180 where the light is redirected to corresponding fiber optic cables 181. The 171 light is directed out a first corresponding pixel at its original trajectory 183. The 173 light is directed out a second corresponding pixel at its original trajectory 185. The 175 light is directed out a third corresponding pixel at its original trajectory 187. Thus light received at one pixel cell is divided into its origination trajectories (or background points) and directed to the series of pixel cells that corresponds to each respective trajectory. If a single pixel cell has one hundred receiving segments, it will have relationships with one hundred corresponding sending segments, each located in one of one hundred pixel cells. Again, the light flows exactly in the reverse direction simultaneously.

[0037] FIG. 8 illustrates the many trajectories of light receiving and many trajectories of light sending occurring concurrently in the electronic (second embodiment) pixel array. FIG. 8 illustrates a series of pixel cells operating cooperatively. In practice light is being received by each pixel from a multitude of directions 191 and light is being sent from each pixel in a multitude of opposite directions 211. FIG. 8 shows the LED and photodiode arrays within each pixel operating cooperatively: they receive light and send electric signals representing the light's frequencies and intensity to an electronic mapping center 199, which amplifies the signals and sends corresponding power to the respective LEDs, which produce light that simulates the light received and send it at the same trajectory as received. Each pixel both receives and sends light. One additional use can come from the electro-optic embodiment (as opposed to the all fiber optic embodiment). Namely, since all of the information about the light coming into the cloaked asset passes through a CPU in the 199, the information can be fed to a VR viewing system 201: a person inside of the cloaked asset, wearing a head mounted virtual reality (VR) unit, can "see through" the walls of the cloaked asset. They can see a precise three dimensional representation of their surroundings from within the cloaked asset.
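
One update cycle of the electronic mapping center 199, including the optional VR tap 201, might look like the following sketch. The callables read_photodiodes, drive_leds, and vr_feed are hypothetical stand-ins for hardware interfaces the specification does not detail.

    def mapping_center_step(read_photodiodes, drive_leds, segment_map, vr_feed=None):
        """Relay one frame: read every receiving segment, look up its mapped
        sending segment, and power the corresponding LED. Because the whole
        light field passes through the CPU, it can also be forwarded to a
        head mounted VR unit worn inside the asset."""
        readings = read_photodiodes()                 # {segment: (R, G, B)}
        drive_leds({segment_map[seg]: rgb
                    for seg, rgb in readings.items() if seg in segment_map})
        if vr_feed is not None:
            vr_feed(readings)  # occupant "sees through" the cloaked walls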

[0038] In practice, many thousands of such pixel cells, each containing tens of focal point receiving segments, all operating collectively, are required to achieve near invisibility from any observing perspective. It should be underscored that each pixel receives light from a multitude of directions. If a pixel has one hundred focal point collectors, they will cooperate with one hundred other pixels which will send light in one hundred different trajectories. The same one hundred pixels will each send light from one respective trajectory to that same pixel cell. This can be seen in the mapping illustrations of FIGS. 9a and 9b. Further, the pixel cells are connected to one another to form a sturdy flat panel. The deployed panel is glued or otherwise fastened to the surface of the object which is to be cloaked. This is the case with the beach assault craft of FIGS. 9a and 9b.

[0039] FIG. 9a shows a pixel mapping process where a first light trajectory is mapped from a pixel "M" 227 segment to a pixel "N" 225 segment. FIG. 9b shows the pixel mapping process of FIG. 9a where a second light trajectory is mapped from a pixel "M" 227a segment to a pixel "O" 231 segment. FIGS. 9a and 9b illustrate how lasers can be used to construct a map of which pixel segments correspond with which pixel segments. It is assumed that the depicted navy beach assault craft 221 has been fitted with permanent 3D pixel skin. When mapping the 3D pixel skin, Laser 1 223 and Laser 2 229 are always sending beams that are exactly opposite. At the mapping center, an electronic means for identifying which segment of which pixel cell is receiving laser light is utilized. In the fiber optic embodiment, a means for detecting which fibers are receiving the respective two laser lights is utilized. In FIG. 9a, Laser 1 is registered by a segment of pixel cell N; Laser 2, which is exactly opposite in trajectory to Laser 1, is registered in a segment of pixel cell M. These two respective segments are therefore mapped as a corresponding set of segments that will always communicate with one another. (Their fiber optic cables can be welded together at the mapping center, or alternately, in the electrooptic embodiment, a CPU and memory can make note that they are a corresponding pair of pixel segments.) In FIG. 9b, Laser 2 strikes a second segment of pixel M 227a, while Laser 1 is registered by a segment of pixel cell O 231. These two segments are therefore mapped as a corresponding segment pair. Note that if M has one hundred segments, it will communicate with one hundred segments of one hundred different pixel cells. It is important to note conceptually that the pixel segments that correspond to the M pixel segments will be located on every surface of the navy beach assault craft (as is illustrated in FIG. 10). This is why an observer viewing from any perspective will see an accurate representation of the cloaked object's background. Once a number of pixel segments are mapped by laser, the rest of the pixels can be mapped by logic in software designed to mathematically create the map. Alternately, the laser process can be used to generate the whole pixel map. In a rigid application, once the map is generated it is permanent. It can however be periodically recalibrated to ensure its precision. In the fiber optic embodiment, each of the fibers of each respective pixel cell segment is paired physically by splicing or welding with one corresponding fiber. In the electronic LED photodiode embodiment, each receiving pixel segment is associated with one sending segment, with this relationship being stored in a computer memory.
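
The laser mapping of FIGS. 9a and 9b can be sketched as a calibration loop. The hardware hooks fire_opposed_lasers and lit_segment are hypothetical, and pair_segments is the helper sketched after [0035]; the specification describes the procedure, not any particular software.

    def map_with_lasers(fire_opposed_lasers, lit_segment, pair_segments, n_shots):
        """For each shot, two exactly anti-parallel laser beams pass through
        the skin; the two segments that register them share one trajectory
        and are recorded as a corresponding pair (welded fibers in the first
        embodiment, a CPU memory entry in the second)."""
        for shot in range(n_shots):
            fire_opposed_lasers(shot)        # step the lasers to a new trajectory
            seg_a = lit_segment("laser1")    # which segment registered Laser 1
            seg_b = lit_segment("laser2")    # which segment registered Laser 2
            if seg_a is not None and seg_b is not None:
                pair_segments(seg_a, seg_b)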

[0040] FIG. 10 is an asset covered in segmented pixel skin. It illustrates that one representative pixel cell has segments that correspond to pixel cell segments on multiple sides of the cloaked object. FIG. 10 illustrates five different trajectories of light entering one pixel cell which is one among many pixel cells on a mounted 3D Pixel Skin covered asset. Note that each of the five different trajectories emerges from a different surface. Each of the five exiting trajectories is the same as its respective entering trajectory. In practice, each pixel cell may separate light into tens of different relative trajectories, some of which emerge from every surface of the object. Light enters a pixel cell at a first trajectory 241 and exits on the same first trajectory at 241a. Light enters the same pixel cell at a second trajectory 243 and exits at that same second trajectory at 243a. Light enters the same pixel cell at a third trajectory 245 and exits at that same third trajectory at 245a. Light enters the same pixel cell at a fourth trajectory 247 and exits at the same fourth trajectory at 247a. Light enters the same pixel at a fifth trajectory 249 and exits at that same fifth trajectory at 249a. Thus light received by one pixel cell on a first surface exits from all other surfaces of the cloaked asset. In a perfect cloaking system, the one pixel on a first side of the cloaked object would have similar relationships with every pixel on every other side of the cloaked asset. This causes an observer who is moving around the cloaked object to see every background point through every pixel on the object. In practical application some averaging would occur such that the background reproduction is not perfect.

Operation of the Invention

[0041] In operation, the invention functions as described above with reference to FIGS. 1 through 10: light incident upon each receiving pixel segment is channeled through its mapped fiber (in the fiber optic embodiment) or electronically reproduced at its mapped LED (in the electronic embodiment) such that it exits the asset on its original trajectory, rendering the asset invisible from nearly any observation point.

Conclusion, Ramifications, and Scope

[0053] Thus the reader will see that the Multi-Perspective Background Simulation Cloaking Process and Apparatus of this invention provides a highly functional and reliable means for using well known technology to conceal the presence of an object (or asset). This is achieved optically in a first embodiment and electronically in a second embodiment.

[0054] While the above description contains many specifics, these should not be construed as limitations on the scope of the invention, but rather as an exemplification of one preferred embodiment thereof. Many other variations are possible.

[0055] Lenses which enable wide angle light segmentation at the pixel level can be designed in many configurations and in series, using multiple elements, shapes, and gradient indices. Light can be directed by a lens to form a series of focal points along a focal plane instead of along a focal curve. A fiber optic element can be replaced by a light pipe with internal reflection means that performs substantially equivalently. Photodiodes and LEDs can be replaced by other light detecting and light producing means respectively. The mapping means can consist of a simple plug which connects prefabricated (and pre-mapped) segmented pixel array components designed to fit onto a particular asset.

[0056] The electronic embodiment segmented pixel receiving array (trajectory specific photodiode array) can be used as input for a video recording and storage means. (This is a novel camera application of the present invention.) The electronic embodiment segmented pixel sending array (trajectory specific LED array) can be used as an output means for displaying video images which enable multiple users in different positions to view different perspectives simultaneously on a single video display device. Alternately, one viewer moving around relative to the display will see different images, just as he would when moving around in the real world. (This is a novel video display application of the present invention.)
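
This display application amounts to what is now commonly called a light field display: each sending segment emits a different recorded view along a different trajectory. As a toy sketch of the perspective selection involved (all names hypothetical; a physical segmented-pixel display performs this selection passively, by geometry alone):

    def view_for_observer(recordings, observer_angle, fov):
        """Pick which recorded trajectory bin an observer at `observer_angle`
        (radians from the display normal) sees. `recordings` is a list of
        images, one per trajectory bin spanning the field of view `fov`."""
        half = fov / 2.0
        clamped = max(-half, min(half, observer_angle))
        bin_index = int((clamped + half) / fov * (len(recordings) - 1))
        return recordings[bin_index]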

[0057] The fiber optic embodiment segmented pixel receiving array (trajectory specific fiber array) can be used as input for a video recording and storage means. (This is a novel camera application of the present invention.) The fiber optic embodiment segmented pixel sending array (trajectory specific fiber array) can be used as an output means for displaying video images which enable multiple users in different positions to view different perspectives simultaneously on a single video display device. Alternately, one viewer moving around relative to the display will see different images, just as he would when moving around in the real world. (This is a novel video display application of the present invention.)