322 361 Computer Systems Architecture, Chapter 7: Multi-Processors Architecture

References:
Hayes, John P. Computer Architecture and Organization. 3rd ed. Malaysia: McGraw-Hill, 1998, pp. 550-566.
Stallings, William. Computer Organization and Architecture: Designing for Performance. 5th ed. New Jersey: Prentice-Hall, 2000, pp. 621-667.
http://en.wikipedia.org

Taxonomy of Parallel Processor Architectures (figure)

High Performance Computing
• Supercomputer
• Computer Cluster
• Grid Computing
• Multi-Core Technology

High Performance Computing
• The term is most commonly associated with computing used for scientific research. A related term, high-performance technical computing (HPTC), generally refers to the engineering applications of cluster-based computing (such as computational fluid dynamics and the building and testing of virtual prototypes). Recently, HPC has come to be applied to business uses of cluster-based supercomputers, such as data warehouses, line-of-business (LOB) applications, and transaction processing.
• HPC is sometimes used as a synonym for supercomputing; but in other contexts, "supercomputer" is used to refer to a more powerful subset of "high performance computers," and "supercomputing" becomes a subset of "high performance computing." The potentially confusing overlap of these usages is apparent.

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). In the 1980s a large number of smaller competitors entered the market, paralleling the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".

Control Data Corporation (http://ed-thelen.org/comp-hist/vs-cdc-6600.html)
• CDC 6600 (1965): 1 MFLOPS

Control Data Corporation (http://ed-thelen.org/comp-hist/vs-cdc-6600.html)
• CDC 7600 (1970): 10 MFLOPS, PP (Peripheral Processor)
• CDC 8600 (1971): 10 times the speed of the 7600
• CDC Cyber 170, 180, 200 (1974): vector processor, 4 pipelines, 200 MFLOPS

Minicomputers: DEC (Digital Equipment Corporation)
• PDP-8 (12-bit instructions and words)
• PDP-10 (36-bit instructions and words)
• PDP-11 (16-bit words, dynamic instructions)
• VAX 11/XXX series
• DEC Alpha 1000, 2000

Minicomputers: Sun SPARC (Sun Microsystems Inc.)
• RISC I, II (Berkeley RISC)
• UltraSPARC I, III, IV

MIPS Technologies Inc.
MIPS I: 32-bit internal/external data and address paths, registers
• MIPS R2000 (1985)
• MIPS R3000 (1988)
MIPS II
• MIPS R6000

MIPS III: 64-bit internal/external data and address paths, registers
• MIPS R4000 (1991)
MIPS IV
• MIPS R8000 (1994), R10000 (1996)
• R12000 (1998), R14000 (2001)
• R16000 (2002), R24K (2003)
2007: MIPS Technologies acquired the Portugal-based mixed-signal intellectual-property company Chipidea. On May 8, 2009, Chipidea was sold to Synopsys.

Supercomputer: In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard. The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies.

Cray Inc. (http://www.cray.com)
History:
• Cray Research, founded in 1972 by Seymour Cray
• 1995: Cray Computer Corporation folded; Cray Research was bought by SGI the following year
• 2000: merged with Tera Computer Company to form Cray Inc.

Cray Research (1972-2000):
• Cray-1, Cray X-MP, Cray-2, Cray Y-MP
• Cray T3D, Cray T3E
• Cray C90, Cray T90, Cray J90 (MARQUISE)
Under SGI (1996-2000) and as Cray Inc.:
• Cray MTA-2
• Cray X1, XD1
• Cray XT3, XT4, XT5

Cray Research Inc. (http://www.cray.com)
• CRAY-1™ (12-pipeline processor, 160 MIPS, 1976)
• CRAY X-MP™ (multiprocessor supercomputer, 1982)
• CRAY-2™ (1985)
• CRAY Y-MP®, CRAY Y-MP EL™ (2.3 GFLOPS, 1988)
• CRAY T3D, CRAY T3E, T90™ (MPP: Massively Parallel Processing, 1993)
http://www.cray.com/about_cray/history.html

Cray Inc. (http://www.cray.com)
• Cray X1E
• Cray XT3, XT4, XT5
• Cray XD1 (2004)
• Cray XMT (2006)
• Cray CX1 (2008), with dual- to quad-core Intel processors
• Cray XT5 (2009), with more than 224,000 processing cores, 1.75 petaflops

The Cray X1E supercomputer combines the processor performance of traditional vector systems with the scalability of microprocessor-based architectures. High-performance interconnect and memory subsystems allow the Cray X1E to scale from 16 to 8,192 processors, delivering up to 147 TFLOPS in a single system. The Cray X1E and its predecessor, the Cray X1™, are the first vector systems designed to scale to thousands of processors in a single system image.

Cray XD1: Rice University purchased what was, at the time, the largest Cray XD1 system to date. Equipped with 336 dual-core AMD Opteron™ processors (672 cores), the supercomputer will be used by Rice researchers for studies that include computer science, biophysics, computational mathematics, earth sciences and cognitive neuroscience.

The Cray XT3™ supercomputer, purpose-built to meet the special needs of capability-class HPC applications, offers a new level of scalable computing.

Cray XT4 (code-named Hood) is an updated version of the Cray XT3 supercomputer, released on November 18, 2006. It includes an updated version of the SeaStar interconnect router called SeaStar2, processor sockets for Socket AM2 Opteron processors, and 240-pin unbuffered DDR2 memory. It also includes support for FPGA coprocessors that plug into riser cards in the service and I/O blades. The interconnect, cabinet, system software and programming environment remain unchanged from the Cray XT3.

Red Storm compute board (figure): 4 DIMM slots, redundant VRMs, L0 RAS computer, Cray SeaStar™. (Slide from David Harper and John Feo, Cray Inc.)

Cray XT5h, XT5: The Cray XT5 is an updated version of the Cray XT4 supercomputer, launched on November 6, 2007. It includes a faster version of the XT4's SeaStar2 interconnect router called SeaStar2+, and can be configured either with XT4 compute blades, which have four dual-core AMD Opteron processor sockets, or XT5 blades, with eight sockets supporting dual- or quad-core Opterons. The XT5h (hybrid) variant also includes support for Cray X2 vector processor blades, and Cray XR1 blades which combine Opterons with FPGA-based Reconfigurable Processor Units (RPUs) provided by DRC Computer Corporation. The XT5 retains the same UNICOS/lc operating system as the XT4.

Kraken, a Cray XT5 supercomputer at Oak Ridge National Laboratory. Jaguar underwent an upgrade to 224,256 cores in 2009, after which its performance jumped to 1.75 petaflops, taking it to the number-one position in the 34th edition of the TOP500 list in fall 2009. With the SeaStar2+ interconnect, the system can be configured either with XT4 compute blades, which have four dual-core AMD Opteron processor sockets, or XT5 blades, with eight sockets supporting dual- or quad-core Opterons. The XT5 uses a 3-dimensional torus network topology. The XT5 family runs the Cray Linux Environment, formerly known as UNICOS/lc, which incorporates SUSE Linux Enterprise Server and Cray's Compute Node Linux.

Cray XT5h HPC Workflow (figure)

Supercomputer: Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, which purchased many of the 1980s companies to gain their experience. As of July 2009, the IBM Roadrunner, located at Los Alamos National Laboratory, is the fastest supercomputer in the world.

Supercomputer: In the late 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

TOP500 list (http://www.top500.org), June 2008 (figure)

Top 10 positions of the 33rd TOP500 List, released during the ISC'09 conference, June 23, 2009 in Hamburg, Germany. (figure)

Top 10 positions of the 35th TOP500 List, released during the ISC'10 conference, May 31, 2010 in Hamburg, Germany. (figure)

The systems ranked #1 since 1993:
• Cray Jaguar (since November 2009)
• IBM Roadrunner (June 2008 – November 2009)
• IBM Blue Gene/L (November 2004 – June 2008)
• NEC Earth Simulator (June 2002 – November 2004)
• IBM ASCI White (November 2000 – June 2002)
• Intel ASCI Red (June 1997 – November 2000)
• Hitachi CP-PACS (November 1996 – June 1997)
• Hitachi SR2201 (June 1996 – November 1996)
• Fujitsu Numerical Wind Tunnel (November 1994 – June 1996)
• Intel Paragon XP/S 140 (June 1994 – November 1994)
• Fujitsu Numerical Wind Tunnel (November 1993 – June 1994)
• TMC CM-5 (June 1993 – November 1993)

The Green500 (http://www.green500.org)

The Green500: the June 2010 Green500 list, ranking the world's most energy-efficient supercomputers. (figure)

IBM BladeCenter QS22 (Cell Broadband Engine™ Architecture)
• Two high-performance IBM PowerXCell 8i processors
• Up to 32 GB DDR2 memory, dual Gigabit Ethernet
• Optional dual-port 4x InfiniBand® HCA connected through PCI Express
• Optional Serial Attached SCSI daughter cards connected through PCI-X
• Optional 8 GB uFDM flash drive (second half of 2008 availability)
• Optional I/O buffer memory DIMMs (up to 2 GB, 2 x 1 GB)

IBM Roadrunner:
• Dates: operational 2008, final completion 2009
• Sponsors: IBM, United States
• Operators: National Nuclear Security Administration, United States
• Location: Los Alamos National Laboratory, United States
• Architecture: 12,960 IBM PowerXCell 8i CPUs, 6,480 AMD Opteron dual-core processors, InfiniBand, Linux
• Power: 2.35 MW
• Space: 296 racks, 6,000 sq ft (560 m²)
• Memory: 103.6 TiB
• Speed: 1.7 petaflops (peak)
• Cost: US$133M
• Ranking: TOP500 #1, June 2008
• Purpose: modeling the decay of the U.S. nuclear arsenal
• Legacy: first TOP500 Linpack run to sustain 1.0 petaflops, May 25, 2008
• Web site: http://www.lanl.gov/roadrunner/

In 2006, the Department of Energy's National Nuclear Security Administration selected Los Alamos National Laboratory as the development site for Roadrunner, named after the New Mexico state bird and costing about $100 million: the first "hybrid" supercomputer, one powerful enough to operate at one petaflop. That is twice as fast as the then No. 1 rated IBM Blue Gene system at Lawrence Livermore National Lab, itself nearly three times faster than the leading contenders on the then-current TOP500 list of worldwide supercomputers.
TOP500 entry for Roadrunner:
• System Name: Roadrunner
• Site: DOE/NNSA/LANL
• System Family: IBM Cluster
• System Model: BladeCenter QS22/LS21 Cluster
• Computer: BladeCenter QS22/LS21 Cluster, PowerXCell 8i 3.2 GHz / Opteron DC 1.8 GHz, Voltaire Infiniband
• Vendor: IBM
• Application area: Not Specified
• Installation Year: 2008
• Operating System: Linux
• Interconnect: Infiniband
• Processor: PowerXCell 8i 3200 MHz (12.8 GFLOPS)

QS22 specification:
• Form Factor: single-wide blade server for BladeCenter
• Processors: 3.2 GHz IBM PowerXCell 8i
• Number of Processors: two standard, each with one PPE core and eight enhanced double-precision (eDP) SPE cores
• L2 Cache: 512 KB per IBM PowerXCell 8i processor, plus 256 KB of local store memory for each eDP SPE
• Memory: up to 32 GB (16 GB per processor)
• Internal Disk Storage: optional 8 GB modular flash drive
• Networking: dual Gigabit Ethernet
• I/O Upgrade: Serial Attached SCSI (SAS) daughter card connected via PCI-X (CFFv)
• Optional Connectivity: dual-port InfiniBand 4x HCA connected via PCI Express (SFF)
• Operating Systems: Red Hat Enterprise Linux 5.2 (upon availability)
• Warranty: 3-year

The RIKEN MDGRAPE-3 supercomputer: MDGRAPE-3 is an ultra-high-performance petascale supercomputer system developed by the RIKEN research institute in Japan. It is a special-purpose system built for molecular dynamics simulations, especially protein structure prediction. MDGRAPE-3 consists of 201 units of 24 custom MDGRAPE-3 chips (4,808 total), plus additional dual-core Intel Xeon processors (code-named "Dempsey") which serve as host machines. In June 2006 RIKEN announced its completion. It is more than three times faster than the 2006 version of the IBM Blue Gene/L system, which then led the TOP500 list of supercomputers at 0.28 petaflops. Because it is not a general-purpose machine capable of running the LINPACK benchmark, MDGRAPE-3 does not qualify for the TOP500 list.

Computer Cluster: a group of linked computers working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Often clusters are used primarily for computational purposes, rather than for handling I/O-oriented operations such as web service or databases.

Computer Cluster: For instance, a cluster might support computational simulations of weather or vehicle crashes. The primary distinction within compute clusters is how tightly coupled the individual nodes are. For instance, a single compute job may require frequent communication among nodes; this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. This cluster design is usually referred to as a Beowulf cluster. The other extreme is where a compute job uses one or a few nodes, and needs little or no inter-node communication. This latter category is sometimes called "grid" computing. Tightly coupled compute clusters are designed for work that might traditionally have been called "supercomputing." Middleware such as MPI (Message Passing Interface) or PVM (Parallel Virtual Machine) permits compute-clustering programs to be portable to a wide variety of clusters; a minimal MPI sketch follows.
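To make the middleware point concrete, here is a minimal MPI sketch in C (not from the slides; it assumes an MPI implementation such as Open MPI or MPICH is installed). Each process computes a partial result and rank 0 combines them, the basic pattern of a tightly coupled cluster job.

```c
/* Minimal MPI sketch: one process per cluster node (or core).
   Build:  mpicc -o sum sum.c      Run:  mpirun -np 4 ./sum        */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */

    int partial = rank + 1;                  /* each node's local work     */
    int total = 0;
    /* Combine the partial results on rank 0 over the interconnect. */
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

The same source runs unchanged on a Beowulf cluster or on a single multi-core machine, which is exactly the portability the slide attributes to MPI and PVM.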

NASA's Columbia supercomputer is built from 20 SGI Altix 3000 nodes, each powered by 512 Intel Itanium 2 processors, bringing the total to 10,240 processors. Columbia is housed at the NASA Advanced Supercomputing Facility at Ames Research Center in Mountain View, California. It runs at 51.87 teraflops, or 51.87 trillion floating-point calculations per second. It has 20 TB of RAM, 440 TB of storage, and 10 PB of archive storage. It was named in honor of the crew of STS-107, who were killed in the Columbia disaster. The nodes are connected together with a Voltaire InfiniBand ISR 9288 switch with transfer speeds of up to 10 gigabits (or 1,250 megabytes) per second, 52 10-gigabit Ethernet links, and multiple 1-gigabit Ethernet nodes.

An example of a computer cluster: NASA's Columbia supercomputer (SGI), installed in 2004, is a 10,240-microprocessor cluster of twenty Altix 3000 systems, each with 512 microprocessors, interconnected with InfiniBand.

Based on the SGI NUMAflex™ architecture:
• 20 SGI Altix™ 3700 superclusters, each with 512 processors
• Global shared memory across 512 processors
10,240 Intel Itanium 2 processors:
• Processor speed: 1.5 GHz
• Cache: 6 MB
• 1 terabyte of memory per 512 processors, 20 TB total memory
Operating environment:
• Linux-based operating system
• PBS Pro™ job scheduler
• Intel Fortran/C/C++ compiler
• SGI ProPack™ 3.2 software
Interconnect:
• SGI NUMAlink
• InfiniBand network
• 10 Gbit Ethernet
• 1 Gbit Ethernet
Storage:
• Online: 440 TB of Fibre Channel RAID storage
• Archive storage capacity: 10 petabytes

Grid Computing: Grid computing (or the use of computational grids) is the combination of computer resources from multiple administrative domains applied to a common task, usually a scientific, technical or business problem that requires a great number of computer processing cycles or the need to process large amounts of data. Grid computing is distributed, large-scale cluster computing, as well as a form of network-distributed parallel processing. The size of a grid may vary from small (confined to a network of computer workstations within a corporation, for example) to large (public collaborations across many companies and networks).

Grid Computing: What distinguishes grid computing from conventional cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a computing grid may be dedicated to a specialized application, it is often constructed with the aid of general-purpose grid software libraries and middleware.

Multi-Core Technology: A multi-core CPU (or chip-level multiprocessor, CMP) combines two or more independent cores into a single package composed of a single integrated circuit (IC), called a die, or of several dies packaged together. Each "core" independently implements optimizations such as superscalar execution, pipelining and multithreading. A system with n cores is effective when it is presented with n or more threads concurrently, as the sketch below illustrates.
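To illustrate the n-cores/n-threads point, this minimal sketch (not from the slides) uses POSIX threads in C; the core count of 4 and the busy-work loop are illustrative assumptions.

```c
/* Sketch: keep an n-core CPU busy with n concurrent threads.
   NUM_CORES = 4 is an assumption for the example; build with cc -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NUM_CORES 4

static void *worker(void *arg) {
    long id = (long)arg;
    volatile double sum = 0.0;
    for (long i = 0; i < 100000000L; i++)   /* independent CPU-bound work */
        sum += (double)i;
    printf("thread %ld finished\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_CORES];
    /* With >= NUM_CORES runnable threads, the OS can keep every core busy;
       with only one runnable thread, the remaining cores sit idle. */
    for (long i = 0; i < NUM_CORES; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_CORES; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```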

Multi-Core Technology: The amount of performance gained by the use of a multi-core processor depends on the problem being solved and the algorithms used, as well as on their implementation in software (Amdahl's law, worked out below). For so-called "embarrassingly parallel" problems, a dual-core processor with two cores at 2 GHz may perform very nearly as fast as a single core at 4 GHz. Other problems, though, may not yield as much speedup. This all assumes that the software has been designed to take advantage of available parallelism; if it hasn't, there will not be any speedup at all. However, the processor will multitask better, since it can run two programs at once, one on each core.
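Amdahl's law itself, with the slide's dual-core example worked out (the parallel fraction P and the core counts below are illustrative values, not from the slides):

```latex
% Speedup on n cores when a fraction P of the program parallelizes:
S(n) = \frac{1}{(1 - P) + \frac{P}{n}}
% Embarrassingly parallel, P \to 1, n = 2:  S(2) \to 2,
% so two 2 GHz cores approach one 4 GHz core.
% Half-serial code, P = 0.5, n = 2:  S(2) = \frac{1}{0.5 + 0.25} \approx 1.33,
% and no number of cores can push S beyond 1/(1-P) = 2.
```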

Multi-Core Technology: AMD
• Athlon 64, Athlon 64 FX and Athlon 64 X2
• Opteron: dual- and quad-core server/workstation processors
• Phenom: triple-, quad- and hex-core desktop processors
• Sempron X2: dual-core entry-level processors
• Turion 64 X2: dual-core laptop processors
• Radeon and FireStream: multi-core GPU/GPGPU (10 cores, 16 five-issue-wide superscalar stream processors per core)
• "Interlagos" (32 nm, 16-core): 8 Bulldozer modules (two dies as an MCM), HyperTransport 3.1, hexa-channel DDR3, Socket G34

Multi-Core Technology: IBM
• POWER4: the world's first dual-core processor, released in 2001
• POWER5: a dual-core processor, released in 2004
• POWER6: a dual-core processor, released in 2007
• POWER7: a 4- to 8-core processor, released in 2010; used in PERCS and the Blue Waters project
• PowerPC 970MP: a dual-core processor, used in the Apple Power Mac G5
• Xenon: a triple-core, SMT (Simultaneous Multi-Threading)-capable PowerPC microprocessor used in the Microsoft Xbox 360 game console

Multi-Core Technology: Intel
• Celeron Dual-Core: the first dual-core processor for the budget/entry-level market
• Core Duo: a dual-core processor
• Core 2 Quad: 2 dual-core dies packaged in a multi-chip module
• Core i3, Core i5 and Core i7: a family of multi-core processors, the successor of the Core 2 Duo and the Core 2 Quad
• Itanium 2: a dual-core processor
• Pentium D: 2 single-core dies packaged in a multi-chip module
• Pentium Extreme Edition: 2 single-core dies packaged in a multi-chip module

Multi-Core Technology: Intel (continued)
• Pentium Dual-Core: a dual-core processor
• Teraflops Research Chip (Polaris): a 3.16 GHz, 80-core processor prototype, which the company said would be released within the next five years
• Xeon: dual-, quad-, hexa- and octo-core processors

Multi-Core Technology: Sun
• UltraSPARC IV and UltraSPARC IV+: dual-core processors
• UltraSPARC T1: an eight-core, 32-thread processor
• UltraSPARC T2: an eight-core, 64-concurrent-thread processor

Future Computer Technologies
• Optical Computer: light travels about 30 cm, or one foot, in a nanosecond, and photons have a higher bandwidth than electrons.
• Quantum Computer: a computer in which the time evolution of the state of the individual switching elements is governed by the laws of quantum mechanics.
• DNA or Molecular Computer: DNA computers are faster and smaller than any other computer built so far. But DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable.

Optical Computer: Computers work with binary, on or off, states. A completely optical computer requires that one light beam can turn another on and off. This was first achieved with the photonic transistor, invented in 1989 at the Rocky Mountain Research Center. This demonstration eventually created a growing interest in making photonic logic componentry utilizing light interference. Photonic computing is intended to use photons, or light particles, produced by lasers in place of electrons. Compared to electrons, photons are much faster (light travels about 30 cm, or one foot, in a nanosecond) and have a higher bandwidth.
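The 30 cm figure is just the speed of light over one nanosecond:

```latex
d = c\,t = (3 \times 10^{8}\ \mathrm{m/s}) \times (10^{-9}\ \mathrm{s})
  = 0.3\ \mathrm{m} \approx 30\ \mathrm{cm} \approx 1\ \mathrm{ft}
```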

Optical Computer: Light interference is very frequency-sensitive. This means that a narrow band of photon frequencies can be used to represent one bit in a binary number. Many of today's electronic computers use 64- or 128-bit-position logic. The visible light spectrum alone could enable 123 billion bit positions. Recent research shows promise in temporarily trapping light in crystals. Trapping light is seen as a necessary element in replacing electron storage for computer logic. Recent years have seen the development of new conducting polymers which create transistor-like switches that are smaller, and 1,000 times faster, than silicon transistors.

Optical Computer: Optical switches switch optical wavelengths. 100-terabit-per-second data handling is expected within the decade. Existing technologies include:
• micro-electro-mechanical systems (MEMS), which use tiny mechanical parts such as mirrors
• thermo-optic technology, derived from ink-jet technology, which creates bubbles to deflect light
• liquid crystal display switching, which changes (e.g., by filtering and rotating) the polarization states of the light
• acousto-optic modulators, which use the acousto-optic effect to diffract and shift the frequency of light using sound waves (usually at radio frequency)
• photonic integrated circuits

Quantum Computer: A future technology for designing computers based on quantum mechanics, the science of atomic structure and function. It uses the "qubit," or quantum bit, which can hold an infinite number of values (a continuous superposition of 0 and 1). In 1999, the feasibility of such a computer was demonstrated by a collaboration of scientists at MIT, the University of California at Berkeley and Stanford University, which used a technique similar to MRI scans in hospitals.
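In standard notation (not from the slides), the qubit's "infinite number of values" is the continuum of superposition states:

```latex
% A qubit state is a unit vector over the two basis states:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1
% A register of n qubits carries 2^{n} complex amplitudes at once,
% though a measurement returns only one n-bit outcome.
```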

Quantum Computer: The concept is that the atoms can be made to perform higher-level gating functions rather than just being used to store 0s and 1s. It is believed that such a device can handle multiple operations simultaneously and can factor large numbers 10,000 times faster than today's computers. In late 2001, researchers at IBM computed the factors of the number 15 using quantum techniques (worked through below). Although there are gigantic hurdles to overcome, scientists believe this will be feasible some time in the future. If quantum computing were to come about, the world of cryptography would undergo a dramatic change: in a short amount of time, such a device could be used to find the secret keys of widely used encryption algorithms.
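As a sketch of the factor-15 demonstration, here is the classical arithmetic surrounding Shor's quantum period-finding step, using base a = 7 for illustration (one of the bases usable for N = 15):

```latex
% Quantum step: find the period r of f(x) = 7^x mod 15.
7^{1} \equiv 7,\quad 7^{2} \equiv 4,\quad 7^{3} \equiv 13,\quad
7^{4} \equiv 1 \pmod{15} \;\Rightarrow\; r = 4
% Classical step: the factors fall out of gcds with 7^{r/2} \pm 1.
\gcd(7^{2} - 1,\, 15) = \gcd(48, 15) = 3, \qquad
\gcd(7^{2} + 1,\, 15) = \gcd(50, 15) = 5
```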

Quantum Computer: A quantum computer is a device for computation that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. The basic principle behind quantum computation is that quantum properties can be used to represent data and to perform operations on those data. Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits). (Wikipedia)

Quantum Computer: Quantum computers are different from other computers such as DNA computers and traditional computers based on transistors. Some computing architectures, such as optical computers, may use classical superposition of electromagnetic waves; but without specifically quantum-mechanical resources such as entanglement, it is conjectured that an exponential advantage over classical computers is not possible. If large-scale quantum computers can be built, they will be able to solve certain problems much faster than any of our current classical computers (for example, using Shor's algorithm, Grover's algorithm, or the Deutsch-Jozsa algorithm). (Wikipedia)

Molecular Computer: Molecular computers, also called DNA computers, are massively parallel computers taking advantage of the computational power of molecules (specifically biological ones). Molectronics specifically refers to the sub-field of physics which addresses the computational potential of atomic arrangements. In 2002, researchers from the Weizmann Institute of Science in Rehovot, Israel, unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips. Biocomputers use systems of biologically derived molecules, such as DNA and proteins, to perform computational calculations involving storing, retrieving and processing data.

Molecular Computer: On April 28, 2004, Ehud Shapiro, Yaakov Benenson, Binyamin Gil, Uri Ben-Dor, and Rivka Adar at the Weizmann Institute announced in the journal Nature that they had constructed a DNA computer. It was coupled with an input and output module and is capable of diagnosing cancerous activity within a cell, and then releasing an anti-cancer drug upon diagnosis. MAYA-II (Molecular Array of YES and AND-NOT logic gates) is a DNA computer, based on DNA stem-loop controllers, developed by scientists at Columbia University and the University of New Mexico.

The End. Coming soon: Chapter 8, Parallel Organization.