AIR TRAFFIC CONTROL: How many more air disasters?
Nigel Cook, Electronics World, January 2003, p. 12

In July last year, problems with the existing system were highlighted by the tragic death of 71 people, including 50 school children, caused by confusion when Swiss air traffic control noticed too late that a Russian passenger jet and a Boeing 757 were on a collision course. The processing of extensive radar and other aircraft input information for European air space is a very big challenge, requiring a reliable system to warn air traffic controllers of impending disaster. So why has Ivor Catt's computer solution for Air Traffic Control been ignored by the authorities for 13 years? Nigel Cook reports.

In Electronics World, March 1989, a contributor explained the long-term future of digital electronics: a system in which computers are networked adjacently, like places in the real world, and quite unlike the internet. An adjacent processor network is the ingenious solution proposed for the problem of Air Traffic Control: a grid network of computer processors, each automatically backed up, and each responsible only for the air space of a fixed area. Figure 1 shows the new processing system, the Kernel computer, as proposed for safe, automated air traffic control.
This
system is capable of reliably tracking a vast air space and could
automatically alert human operators whenever the slant distance
between any two nearby aircraft decreased below the safe separation minimum.
Alternatively, if the air traffic controllers were busy or asleep,
it could also send an automatic warning message directly to the
pilot of the aircraft that needs to change course. The existing
suggestions are currently based on software solutions, which are
unsatisfactory. For such a life-and-death application, there is
a need for reliability through redundancy, and a single processor
system does not fit the bill. System freezes must be eliminated
in principle. Tracking aircraft individually by reliably using radar
and other inputs requires massive processing, and a safe international
system must withstand the rigours of continuous use for long periods,
without any software crashes or system overheat failure. The only
practicable way to do this is by using Ivor Catt's adjacent processor network.
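To make the idea concrete, here is a minimal sketch in Python of the job each processor in such a grid would do for its own block of airspace; the callsigns, units and the 5 nautical mile threshold are illustrative assumptions, not figures from Catt's design.

```python
from dataclasses import dataclass
from itertools import combinations
import math

NM_TO_FT = 6076.1  # feet per nautical mile

@dataclass
class Aircraft:
    callsign: str
    x_nm: float    # east-west position within the sector, nautical miles
    y_nm: float    # north-south position within the sector, nautical miles
    alt_ft: float  # altitude, feet

def slant_distance_nm(a: Aircraft, b: Aircraft) -> float:
    """Straight-line (slant) distance between two aircraft, in nautical miles."""
    dz_nm = (a.alt_ft - b.alt_ft) / NM_TO_FT
    return math.sqrt((a.x_nm - b.x_nm) ** 2 + (a.y_nm - b.y_nm) ** 2 + dz_nm ** 2)

def conflict_alerts(sector_traffic: list[Aircraft], minimum_nm: float = 5.0) -> list[str]:
    """Check every pair of aircraft in this sector and report any breach of separation."""
    alerts = []
    for a, b in combinations(sector_traffic, 2):
        d = slant_distance_nm(a, b)
        if d < minimum_nm:
            alerts.append(f"ALERT {a.callsign}/{b.callsign}: {d:.1f} nm apart")
    return alerts

if __name__ == "__main__":
    sector_traffic = [
        Aircraft("AB123", 10.0, 12.0, 36000),
        Aircraft("CD456", 12.0, 14.0, 36000),
        Aircraft("EF789", 40.0, 5.0, 24000),
    ]
    for line in conflict_alerts(sector_traffic):
        print(line)
```

Because each processor only ever checks the handful of aircraft in its own area, the pair-by-pair check stays cheap no matter how large the total airspace grows.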
Originally suggested for a range of problems, including accurate prediction
of global warming and long-range weather, the scheme proposed by
Ivor was patented as the Kernel Machine, an array of 1,000 x 1,000
= 1,000,000 processors, each with its own memory and program, made
using wafer-scale integration with 1000 silicon wafers in a 32 by
32 wafer array. The data transfer rate between adjacent processors
is 100 Mb/s.
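As a rough illustration of how such a grid could be addressed (the scheme below is an assumption for illustration, not taken from the patent), each processor is identified by its column and row in the array, and it exchanges data only with the processors immediately adjacent to it:

```python
# A minimal sketch of addressing in a 1,000 x 1,000 adjacent processor grid:
# each processor owns one fixed cell of airspace and talks only to its
# four immediate neighbours.

GRID = 1000  # 1,000 x 1,000 = 1,000,000 processors

def processor_id(col: int, row: int) -> int:
    """Flatten a (col, row) grid position into a single processor index."""
    assert 0 <= col < GRID and 0 <= row < GRID
    return row * GRID + col

def neighbours(col: int, row: int) -> dict[str, int]:
    """Return the IDs of the adjacent processors, omitting positions off the edge."""
    candidates = {
        "north": (col, row - 1),
        "south": (col, row + 1),
        "west":  (col - 1, row),
        "east":  (col + 1, row),
    }
    return {
        name: processor_id(c, r)
        for name, (c, r) in candidates.items()
        if 0 <= c < GRID and 0 <= r < GRID
    }

if __name__ == "__main__":
    print(processor_id(0, 0), neighbours(0, 0))          # corner: two neighbours
    print(processor_id(500, 500), neighbours(500, 500))  # interior: four neighbours
```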
Ivor Catt's original computer development is the Catt Spiral (Wireless World, July 1981), in which Sir Clive Sinclair's offshoot computer company, Anamartic, invested £16 million. Revolutionary though it was, it came to market and was highly praised by electronics journals.
The technology is proven by the successful introduction in 1989
of a solid-state memory called the Wafer Stack, based on a Catt
patent. This received the `Product of the Year Award' from the U.S.
journal Electronic Products, in January 1990. It is
a wafer scale integration technology, which self-creates a workable
computer from a single silicon wafer by automatically testing each
chip on the wafer, and linking up a spiral of working chips while
by-passing defective ones. This system is as big an advance as the
leap from transistor to compact IC (which was invented in 1959),
because the whole wafer is used without having to be divided up
into individual chips for separate testing and packaging. By having
the whole thing on a single silicon wafer, the time and energy of separating, testing, and packaging the chips are saved, along with the need to mount them separately on circuit boards. By the time
Catt had completed his invention for wafer scale integration, he
was already working on the more advanced project, the Kernel Machine. In the
Sunday Times (12 March 1989, p. D14), journalist Jane Bird interviewed
Ivor Catt and described the exciting possibilities: "in air traffic
control, each processor in the array could correspond to a square
mile of airspace... weather forecasters could see at the press of
a button whether rain from the west would hit Lord's before the
end of cricket play."

The Kernel machine versus P.C. thinking

The primary
problem facing the Kernel Machine is the predominance of single-processor
computer solutions and the natural inclination of programmers to
force software fixes on to inappropriate hardware. Ivor
Catt has no sympathy with ideas to use his Kernel Machine for chemistry
or biology research. However, this sort of technology is vital for
simulation of all real-life systems, since they are all distributed
in space and time. Chemical molecule simulation for medical research
would become a practical alternative to brewing up compounds in
the lab, if such computers became available. It would help to find
better treatments for cancer. Modern
research on the brain shows that the neurons are interconnected
locally. Quite often the false notion is spread that the neocortex
of the brain is a type of `internet'. In reality, the billions of
neurons are each only connected to about 11,000 others, locally.
The network does not connect each cell to every other cell. This
allows it to represent the real world by a digital analogue of reality,
permitting interpretation of visual and other sensory information.
Each processor of the Kernel Machine is responsible for digitally
representing or simulating the events in a designated area of real
space. Certainly, the Kernel machine would be ideally suited to
properly interpret streamed video from a camera, permitting computers
to `see' properly. This would have obvious benefits for security
cameras, satellite spy and weather video, etc.

Catt filed patents for the Kernel Machine in Europe (0 366 702 B1, granted 19 Jan 1994) and the U.S. (5 055 774, granted 8 Oct 1991), at a total patenting cost around the world of about £10,000. His earlier invention, the Catt Spiral, was patented in 1972 but only came to market 17 years later, after £16 million of investment by Anamartic Plc.
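By way of illustration only (this is not Catt's design), the sketch below shows the flavour of such local video processing: the frame is divided into tiles, each notional processor works only on its own tile, and motion is flagged by comparing the tile with the same tile of the previous frame.

```python
# Illustrative sketch: per-tile motion detection, with each tile standing in
# for one local processor that never sees the rest of the frame.

def split_into_tiles(frame, tile):
    """Yield (row, col, pixels) for each tile x tile block of a 2D frame."""
    rows, cols = len(frame), len(frame[0])
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            block = [line[c:c + tile] for line in frame[r:r + tile]]
            yield r // tile, c // tile, block

def tile_motion(prev_block, cur_block, threshold=10):
    """Mean absolute pixel difference for one tile; True if it exceeds the threshold."""
    diffs = [abs(p - q) for pr, cr in zip(prev_block, cur_block) for p, q in zip(pr, cr)]
    return (sum(diffs) / len(diffs)) > threshold

def motion_map(prev_frame, cur_frame, tile=8):
    """One entry per tile: which 'processors' saw significant change."""
    prev_tiles = {(r, c): b for r, c, b in split_into_tiles(prev_frame, tile)}
    return {
        (r, c): tile_motion(prev_tiles[(r, c)], b)
        for r, c, b in split_into_tiles(cur_frame, tile)
    }

if __name__ == "__main__":
    prev = [[0] * 16 for _ in range(16)]
    cur = [row[:] for row in prev]
    for r in range(8):            # brighten only the top-left tile
        for c in range(8):
            cur[r][c] = 200
    print(motion_map(prev, cur, tile=8))  # only tile (0, 0) reports motion
```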
Patented design for the new Kernel computer

Figure 2 shows how the Kernel patent differs from the Spiral in two important ways. The Spiral design, as utilised in the Anamartic memory wafer,
once it has been manufactured like an ordinary silicon wafer, is
set up as a whole wafer computer by sending test data into a chip
on the edge of the wafer. If that
chip works, it sends test data into another adjacent chip, which
in turn repeats the process: sidestepping faulty chips and automatically
linking up the good chips into a series network. Each chip that
works is therefore incorporated into a `Spiral' of working chips,
while each defective chip is bypassed. The result saves the labour
of dividing up the wafer, packaging the individual chips separately,
and soldering them separately on to circuit boards. It saves space,
time, and money.
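The following idealised simulation (a simplification of the description above, with the defect map simply given up front rather than discovered chip by chip with test data) shows the essence of the scheme: good chips are linked into one long chain and faulty ones are stepped over, but chips near the end of the chain take many hops to reach.

```python
# Simplified, illustrative model of the Spiral: a series chain of working
# chips with defective chips bypassed.

def build_chain(defective: set[int], total_chips: int) -> list[int]:
    """Return the ordered chain of working chips, skipping defective ones."""
    chain = []
    for chip in range(total_chips):   # walk outward from the edge chip
        if chip in defective:
            continue                  # bypass a faulty chip
        chain.append(chip)            # link the good chip into the chain
    return chain

def access_delay(chain: list[int], chip: int) -> int:
    """Chips near the end of the chain take more hops to reach."""
    return chain.index(chip)

if __name__ == "__main__":
    chain = build_chain(defective={3, 7, 8}, total_chips=16)
    print(chain)                           # [0, 1, 2, 4, 5, 6, 9, 10, ...]
    print(access_delay(chain, chain[-1]))  # the last chip is the slowest to reach
```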
The problem with the Catt Spiral is that, by creating a spiral, or series-connected, memory, it introduces time delays in sending and receiving data from chips near the end of the spiral. Data can also be bottlenecked in the spiral. The invention was innovative, and won awards; yet
by the time Sir Clive Sinclair was ready to begin production for
a massive wafer scale plug-in memory for computers, Ivor Catt was
already arguing that it was superseded by his later invention, the
Kernel machine. Born in 1935, Cambridge-educated Catt is extremely
progressive. His immediate replacement of earlier patents of his
own when new developments arrive seems logical to him, although
it can disturb those who invested in the previous design which has
yet to make a profit.

The Kernel Machine links its chips adjacently into a two-dimensional array, and takes its name from the `kernels' in the corners of each chip, which allow networking through a chip even if it has errors and is not itself used. Kernel computers are designed to have enough networking
to avoid all of the problems of the Spiral wafer. Kernel's built-in
`self repair' works by ignoring individual chips when they burn
out, the concept of reliability through redundancy. There are sufficient
spare chips available on each wafer to take over from failures.
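The sketch below illustrates that idea in the simplest possible terms (the PE names and the interface are invented for illustration): when a processing element fails, the area it was responsible for is handed to a spare.

```python
# Illustrative sketch of 'reliability through redundancy': one processing
# element (PE) per area of airspace, with idle spares ready to take over.

class KernelArray:
    def __init__(self, areas: list[str], spares: list[str]):
        self.spares = list(spares)
        # initial mapping: one working PE per area (PE names are made up here)
        self.assignment = {area: f"PE-{i}" for i, area in enumerate(areas)}

    def report_failure(self, failed_pe: str) -> None:
        """Reassign the failed PE's area to a spare, if one is available."""
        for area, pe in self.assignment.items():
            if pe == failed_pe:
                if not self.spares:
                    raise RuntimeError("no spare PEs left on this wafer")
                replacement = self.spares.pop()
                self.assignment[area] = replacement
                print(f"{failed_pe} failed; {area} now handled by {replacement}")
                return

if __name__ == "__main__":
    array = KernelArray(areas=["sector A", "sector B"], spares=["SPARE-1", "SPARE-2"])
    array.report_failure("PE-0")   # sector A is taken over by a spare
    print(array.assignment)
```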
Catt's intended scientific and commercial computing company calls for a three-stage investment of £0.5m, £8m, and £12m, respectively. The
project outline states: "The scientific market and the commercial
market need to be aware that there are two fundamentally different
methods of computing: large, single processing engines performing
single tasks one at a time, and parallel systems where there is
an array of smaller engines that perform a series of tasks independently
of each other until they are brought together by some management
mechanism to give a result. The scientific market's major application
areas are: scientific and engineering computing; signal and image
processing and artificial intelligence.

"In the commercial world there are a number of application areas
where the application of very fast numerical processing is extremely
useful. As the limits of physical performance are now in sight for
semiconductors, the next level of performance will be achieved by
applying an array of processors to a particular task. To achieve
even better price/performance ratios than are presently available,
the architecture needs to be flexible enough to use any one of a
number of computer processor types. "Having
proven the technology and its ability to be applied to specific
operational areas, the company will set out to licence the technology
within these application areas. The company will also develop intermediate
and peripheral products on its route to the major goal; that of
a parallel processing super-computer using patented technology. "In common
with all companies first entering a high technology market, this
company will make a loss during the initial stages. The various
stages of product development will be interposed with the marketing
of that development. It is anticipated that this will reduce the
negative cash flow impact inherent in an R&D environment. Industry
norms have been applied to the cost of sales, marketing and administration
expenditures, and to the capital costs." In order
to develop the software for the Kernel Computer, current computer
technology will be used, networked in the Kernel adjacent processor
array. Software, for all of the challenges facing the Kernel Computer,
can be tested and debugged on this inexpensive mockup. The next
phase will be the production of the first large scale super-computers
using the Kernel system of wafer-scale integration.
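A mock-up of this kind might look something like the following sketch (the harness and the example cell program are illustrative assumptions): the per-cell software is written once, then exercised on a small in-memory grid whose cells can only see their adjacent neighbours, just as they would on the wafer.

```python
# Illustrative development harness: run one per-cell program over a small
# grid of simulated nodes that only see their four adjacent neighbours.

from typing import Callable

def run_mockup(width: int, height: int, steps: int,
               cell_program: Callable[[float, list[float]], float]) -> list[list[float]]:
    """Run cell_program on every cell for a number of steps.

    cell_program receives the cell's own value and the values of its
    adjacent neighbours, and returns the cell's new value."""
    grid = [[0.0] * width for _ in range(height)]
    grid[height // 2][width // 2] = 100.0          # a single hot spot to spread
    for _ in range(steps):
        nxt = [[0.0] * width for _ in range(height)]
        for r in range(height):
            for c in range(width):
                nbrs = [grid[nr][nc]
                        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= nr < height and 0 <= nc < width]
                nxt[r][c] = cell_program(grid[r][c], nbrs)
        grid = nxt
    return grid

if __name__ == "__main__":
    def average(own, nbrs):
        # stand-in for whatever the real per-cell air-traffic or weather code would do
        return (own + sum(nbrs)) / (1 + len(nbrs))

    for row in run_mockup(width=5, height=5, steps=3, cell_program=average):
        print(" ".join(f"{v:6.2f}" for v in row))
```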
Catt comments: "The first Wafer Scale Integration (WSI) product, a solid
state disc called Wafer Stack, came to market in 1989, based on
`Catt Spiral'. We can now advance to a WSI array processor, the
Kernel machine, with one million processors giving one million million
operations per second. The Kernel machine, when built from an array
of 100 wafers, will retail for £500,000. The external control system
maps out the good and bad chips, and devises a strategy for building
a volatile, perfect square two-dimensional array of 1,000,000 processing
elements (PEs) out of a larger, imperfect array. Reliability is
achieved through redundancy; having spare PEs available. "The
project costs £20 million spread over four years. A proper figure
for profit in this market would be 20% of retail price. The $0.2Bn
turnover needed to justify the Kernel project is dwarfed by the
$50Bn world computer market."
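As a final illustration of the configuration step Catt describes, the sketch below shows one simplified way (an assumption for illustration, not the patented method) that an external controller could pick a perfect logical array out of a larger, imperfect physical one, leaving faulty chips and the leftover good chips, the spares, out of the map.

```python
# Illustrative configuration step: map a perfect n x n logical array onto an
# oversized physical array that contains some faulty chips.

def configure(physical: list[list[bool]], n: int) -> list[list[tuple[int, int]]] | None:
    """physical[r][c] is True if that chip works.  Returns, for each logical
    row, the physical (row, col) positions it is mapped to, or None if the
    wafer cannot supply n good chips in n usable rows."""
    logical = []
    for r, row in enumerate(physical):
        good = [(r, c) for c, ok in enumerate(row) if ok]
        if len(good) < n:
            continue                   # this row cannot supply n good chips
        logical.append(good[:n])       # first n good chips; the rest are spares
        if len(logical) == n:
            return logical
    return None                        # not enough usable rows on this wafer

if __name__ == "__main__":
    # A 5 x 5 physical array (True = working chip) with a few defects, from
    # which a 3 x 3 logical array is assembled.
    wafer = [
        [True,  True,  False, True,  True],
        [True,  False, True,  True,  False],
        [False, True,  True,  True,  True],
        [True,  True,  True,  False, True],
        [True,  True,  True,  True,  True],
    ]
    for logical_row in configure(wafer, n=3):
        print(logical_row)
```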
The Kernel array computer is the machine of the future, replacing the single-processor von Neumann machine of the present day.

Published in Electronics World, January 2003, pp. 12-14.