Duncan's taxonomy is a classification of computer architectures proposed by Ralph Duncan in 1990. Duncan proposed modifications to Flynn's taxonomy to include pipelined vector processors.
The taxonomy was developed during 1988-1990 and was first published in 1990.
Synchronous architectures

This category includes all parallel architectures that coordinate concurrent execution in lockstep fashion, and that do so via mechanisms such as global clocks, central control units, or vector unit controllers. Further subdivision of this category is made primarily on the basis of the synchronization mechanism.
Pipelined vector processors
Pipelined vector processors are characterized by pipelined functional units that accept a sequential stream of array or vector elements, so that different stages of a unit operate on different elements simultaneously. Parallelism is provided both by the pipelining within individual functional units and by operating multiple units of this kind in parallel, as well as by chaining the output of one unit to another unit as input.
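The pipelining and chaining described above can be sketched as a toy simulation. This is a hypothetical illustration, not the design of any particular machine: each functional unit is modeled as a Python generator, and chaining is modeled by feeding the multiply unit's output stream directly into the add unit, element by element, without storing an intermediate vector.

```python
# Hypothetical sketch: vector "chaining" modeled with Python generators.
# Each functional unit is a pipeline stage; the multiply unit's output
# stream feeds the add unit directly instead of waiting for the whole
# result vector to be written back.

def multiply_unit(a, b):
    """Pipelined multiply unit: yields a[i] * b[i] one element at a time."""
    for x, y in zip(a, b):
        yield x * y

def add_unit(stream, c):
    """Pipelined add unit chained to another unit's output stream."""
    for partial, z in zip(stream, c):
        yield partial + z

a, b, c = [1, 2, 3, 4], [10, 20, 30, 40], [5, 5, 5, 5]
# Computes a*b + c with the two units chained together.
result = list(add_unit(multiply_unit(a, b), c))
print(result)  # [15, 45, 95, 165]
```

Because the add unit consumes each product as soon as it is produced, the sketch mirrors how chaining lets a second functional unit begin work before the first has finished the whole vector.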
Vector architectures that stream vector elements into functional units from special vector registers are termed register-to-register architectures, while those that feed the functional units directly from memory are designated memory-to-memory architectures. Early register-to-register architectures from the mid-1970s and early 1980s include the Cray-1 and the Fujitsu VP-200, while the Control Data Cyber 205 and the Texas Instruments Advanced Scientific Computer (ASC) are early examples of memory-to-memory vector architectures.
The late 1980s and early 1990s saw the introduction of vector architectures, such as the Cray Y-MP/4 and the Nippon Electric Corporation SX-3, that supported 4-10 vector processors with shared memory (see NEC SX architecture).
SIMD architectures

This scheme uses the SIMD (Single Instruction Stream, Multiple Data Stream) category from Flynn's taxonomy as a root class for the Processor Array and Associative Memory subclasses. SIMD architectures are characterized by a control unit that broadcasts a common instruction to all processing elements, which execute it in lockstep on their own local data. Common features include the ability of individual processors to disable an instruction and the ability to propagate instruction results to immediate neighbors over an interconnection network.
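The lockstep broadcast and per-processor disabling can be illustrated with a minimal sketch, assuming a toy model in which each list element stands for one processing element's local datum and a mask bit stands for the disable feature; the function name `simd_step` is hypothetical.

```python
# Hypothetical sketch of SIMD lockstep execution: one control unit
# broadcasts each instruction; every processing element (PE) applies it
# to its own local datum, and a mask bit lets individual PEs sit out.

def simd_step(op, data, mask):
    """Broadcast one instruction `op` to all PEs; masked-off PEs keep
    their old value, i.e. they 'disable' the broadcast instruction."""
    return [op(x) if enabled else x for x, enabled in zip(data, mask)]

data = [3, -1, 4, -5]
# Instruction: negate. Only PEs holding negative values participate,
# mimicking the per-processor disable feature described above.
mask = [x < 0 for x in data]
data = simd_step(lambda x: -x, data, mask)
print(data)  # [3, 1, 4, 5]
```

All elements are transformed in the same step by the same instruction; only the mask distinguishes participating from disabled processors.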
Systolic arrays

Systolic arrays, proposed during the 1980s, are multiprocessors in which data and partial results are rhythmically pumped from processor to processor through a regular, local interconnection network. Systolic architectures use a global clock and explicit timing delays to synchronize data flow from processor to processor. Each processor in a systolic array executes an invariant sequence of instructions.
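The rhythmic pumping of data through a mesh can be sketched with a toy simulation of the classic systolic matrix-multiplication array (one common textbook design, not a specific machine): rows of A stream in from the left, columns of B from the top, each skewed by one clock tick per row or column, while every cell accumulates its resident partial result in lockstep.

```python
# Hypothetical sketch of an n x n systolic mesh computing C = A @ B.
# On every global clock tick, each cell multiplies the A and B values
# passing through it, adds the product to its resident partial sum,
# then forwards the A value right and the B value down.

def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]      # partial results stay in place
    a_reg = [[0] * n for _ in range(n)]  # A value held in each cell
    b_reg = [[0] * n for _ in range(n)]  # B value held in each cell
    # 3n - 2 ticks flush all skewed inputs through the mesh.
    for t in range(3 * n - 2):
        new_a = [[0] * n for _ in range(n)]
        new_b = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # Row i of A enters cell (i, 0) skewed by i ticks;
                # column j of B enters cell (0, j) skewed by j ticks.
                a_in = a_reg[i][j - 1] if j > 0 else (
                    A[i][t - i] if 0 <= t - i < n else 0)
                b_in = b_reg[i - 1][j] if i > 0 else (
                    B[t - j][j] if 0 <= t - j < n else 0)
                C[i][j] += a_in * b_in
                new_a[i][j], new_b[i][j] = a_in, b_in
        a_reg, b_reg = new_a, new_b
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

Note that every cell performs the same invariant operation on every tick, and all communication is between immediate neighbors under a single global clock, matching the description above.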
MIMD architectures

Based on Flynn's Multiple Instruction Streams, Multiple Data Streams (MIMD) terminology, this category spans a wide spectrum of architectures in which processors execute multiple instruction sequences on (potentially) dissimilar data streams without strict synchronization. Although both the instruction and data streams can differ across processors, they need not: at any given time, the processors of an MIMD architecture may be executing different instructions on different data, or different stages of the same program. This category is subdivided further primarily on the basis of memory organization.
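The autonomy described above can be sketched with a toy model using threads, where each thread stands for one processor running its own program on its own data; the names `worker`, `programs`, and `results` are illustrative assumptions, not part of any MIMD machine's interface.

```python
# Hypothetical sketch: MIMD-style execution modeled with threads.
# Each "processor" runs its own instruction stream on its own data,
# with no lockstep clock; only the final join synchronizes them.

import threading

results = {}

def worker(name, func, data):
    """One autonomous processor: a private program applied to private data."""
    results[name] = func(data)

programs = [
    ("p0", sum, [1, 2, 3]),                  # processor 0 sums its data
    ("p1", max, [7, 2, 5]),                  # processor 1 runs a different program
    ("p2", lambda d: sorted(d), [3, 1, 2]),  # processor 2 sorts its data
]

threads = [threading.Thread(target=worker, args=p) for p in programs]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # {'p0': 6, 'p1': 7, 'p2': [1, 2, 3]}
```

In contrast to the SIMD sketch earlier, no broadcast instruction or global clock coordinates the three workers; they proceed independently until joined.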
MIMD-based paradigms

This category is for architectures in which a distinctive execution paradigm is at least as fundamental to the design as the MIMD structural organization itself. Thus, dataflow architectures and reduction machines are as much the product of their distinctive execution paradigms as of connecting processors and memories in MIMD fashion. The subcategories are divided according to these paradigms.
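As a minimal sketch of the dataflow paradigm mentioned above (the graph and function names are hypothetical illustrations): an instruction "fires" as soon as all of its input tokens have arrived, so execution order is driven by data availability rather than a program counter.

```python
# Hypothetical sketch of dataflow execution: nodes fire whenever their
# operand tokens are all present, in no predetermined program order.

import operator

# Dataflow graph for (a + b) * (c - d): node -> (operation, input nodes)
graph = {
    "sum":  (operator.add, ["a", "b"]),
    "diff": (operator.sub, ["c", "d"]),
    "prod": (operator.mul, ["sum", "diff"]),
}

def run_dataflow(graph, tokens):
    """Repeatedly fire any node whose operand tokens have all arrived."""
    tokens = dict(tokens)
    fired = set()
    while len(fired) < len(graph):
        for node, (op, deps) in graph.items():
            if node not in fired and all(d in tokens for d in deps):
                tokens[node] = op(*(tokens[d] for d in deps))
                fired.add(node)
    return tokens

out = run_dataflow(graph, {"a": 1, "b": 2, "c": 7, "d": 4})
print(out["prod"])  # (1 + 2) * (7 - 4) = 9
```

Here "sum" and "diff" may fire in either order once their inputs arrive, while "prod" must wait for both: the data dependencies, not an instruction sequence, govern execution.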
MIMD/SIMD hybrid
- Duncan, Ralph, "A Survey of Parallel Computer Architectures", IEEE Computer, February 1990, pp. 5-16.
- Flynn, M.J., "Very High-Speed Computing Systems", Proc. IEEE, Vol. 54, 1966, pp. 1901-1909.
- Introduction to Parallel Algorithms
- Hwang, K., ed., Tutorial Supercomputers: Design and Applications, Computer Society Press, Los Alamitos, California, 1984, esp. chapters 1 and 2.
- Russell, R.M., "The CRAY-1 Computer System", Comm. ACM, Jan. 1978, pp. 63-72.
- Watson, W.J., "The ASC: a Highly Modular Flexible Super Computer Architecture", Proc. AFIPS Fall Joint Computer Conference, 1972, pp. 221-228.
- Jurczyk, Michael and Schwederski, Thomas, "SIMD-Processing: Concepts and Systems", pp. 649-679 in Parallel and Distributed Computing Handbook, A. Zomaya, ed., McGraw-Hill, 1996.
- Kung, H.T., "Why Systolic Arrays?", Computer, Vol. 15, No. 1, Jan. 1982, pp. 37-46.