Lecture 18: Hardware/Software Codesign
Embedded Computing Systems
Mikko Lipasti, adapted from M. Schulte
Based on slides and textbook from Wayne Wolf, High Performance Embedded Computing, © 2007 Elsevier
Topics
- Platforms
- Performance analysis
- Design representations
- Hardware/software partitioning
- Co-synthesis for general multiprocessors
- Optimization concepts
- Simulation
© 2006 Elsevier
Design platforms
- Different levels of integration:
  - PC + board
  - Custom board with CPU + FPGA or ASIC
  - Platform FPGA
  - System-on-chip
CPU/accelerator architecture
- The CPU is sometimes called the host.
- The CPU and accelerator communicate via shared memory.
  - May use DMA to communicate.
[Figure: CPU and accelerator connected through a shared memory]
Example: Xilinx Virtex-4
- System-on-chip:
  - FPGA fabric
  - PowerPC
  - On-chip RAM
  - Specialized I/O devices
- FPGA fabric is connected to the PowerPC bus.
- A MicroBlaze CPU can be added in the FPGA fabric.
Example: WILDSTAR II Pro
[Figure: WILDSTAR II Pro board]
Performance analysis
- Must analyze accelerator performance to determine system speedup.
- High-level synthesis helps:
  - Use as an estimator for accelerator performance.
  - Use to implement the accelerator.
Data path/controller architecture
- The data path performs regular operations and stores data in registers.
- The controller provides the required sequencing.
[Figure: controller driving the data path]
High-level synthesis
- High-level synthesis creates a register-transfer description from a behavioral description.
- Schedules and allocates:
  - Operators
  - Variables
  - Connections
- A control step, or time step, is one cycle in the system controller.
- Components may be selected from a technology library.
Models
- Model the behavior as a data flow graph.
- The critical path is the set of nodes on the path that determines schedule length.
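The critical path of a data flow graph can be found with a topological sweep, taking each node's finish time as its delay plus the latest finish among its predecessors. A minimal sketch (the node names and delays are hypothetical):

```python
from collections import defaultdict, deque

def critical_path_length(delays, edges):
    """Length of the longest (critical) path through a data flow graph.

    delays: dict mapping node -> operator delay in cycles
    edges:  list of (src, dst) dependence edges (the graph must be a DAG)
    """
    preds = defaultdict(list)
    succs = defaultdict(list)
    indeg = {n: 0 for n in delays}
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)
        indeg[v] += 1

    # Process nodes in topological order.
    ready = deque(n for n in delays if indeg[n] == 0)
    finish = {}
    while ready:
        n = ready.popleft()
        # Earliest completion: own delay after all predecessors finish.
        finish[n] = delays[n] + max((finish[p] for p in preds[n]), default=0)
        for s in succs[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values())

# Diamond-shaped DFG: a feeds b and c, both feed d.
length = critical_path_length(
    {"a": 1, "b": 2, "c": 3, "d": 1},
    [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")],
)  # critical path a -> c -> d, 5 cycles
```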
Accelerator estimation
- How do we use high-level synthesis, etc., to estimate the performance of an accelerator?
- We have a behavioral description of the accelerator function.
- Need an estimate of the number of clock cycles.
- Need to evaluate a large number of candidate accelerator designs.
  - Can't afford to synthesize them all.
Estimation methods
- Hermann et al. used numerical methods.
  - Estimated incremental costs due to adding blocks to the accelerator.
- Henkel and Ernst used path-based scheduling.
  - Cut the CDFG into subgraphs: reduce loop iteration counts; cut at large joins; divide into equal-sized pieces.
  - Schedule each subgraph independently.
- Vahid and Gajski estimate controller and data path costs incrementally.
Single- vs. multi-threaded
- One critical factor is available parallelism:
  - Single-threaded/blocking: the CPU waits for the accelerator.
  - Multithreaded/non-blocking: the CPU continues to execute along with the accelerator.
- To multithread, the CPU must have useful work to do.
  - But the software must also support multithreading.
Total execution time
[Figure: two schedules for CPU processes P1–P4 and accelerator call A1. Single-threaded: the CPU blocks while A1 runs, so everything serializes. Multi-threaded: P2–P4 execute on the CPU while A1 runs on the accelerator.]
Execution time analysis
- Single-threaded: count the execution time of all component processes.
- Multi-threaded: find the longest path through the execution.
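The two analyses above can be contrasted numerically. A minimal sketch with hypothetical durations, assuming a dependence structure in which only P1 must precede the accelerator call A1, and P2/P3 are free to run on the CPU while A1 executes:

```python
# Hypothetical durations in cycles (names P1-P3, A1 follow the slides;
# the numbers are illustrative, not from the lecture).
cpu_segments = {"P1": 100, "P2": 50, "P3": 80}
accel = {"A1": 120}

# Single-threaded/blocking: the CPU waits for A1, so the total is
# simply the sum of every component process.
single_threaded = sum(cpu_segments.values()) + sum(accel.values())  # 350

# Multi-threaded/non-blocking: after P1, the accelerator runs A1 while
# the CPU runs P2 then P3; the total is the longest path.
multi_threaded = cpu_segments["P1"] + max(
    accel["A1"],                                # accelerator branch
    cpu_segments["P2"] + cpu_segments["P3"],    # CPU branch
)  # 100 + max(120, 130) = 230
```

The gap (350 vs. 230 cycles here) is exactly the accelerator time hidden behind useful CPU work, which is why the CPU must have work available for multithreading to pay off.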
Hardware-software partitioning
- Partitioning methods usually allow more than one ASIC.
- Typically ignore CPU memory traffic in bus utilization estimates.
- Typically assume that the CPU process blocks while waiting for the ASIC.
[Figure: CPU and two ASICs sharing a bus with memory]
Synthesis tasks
- Scheduling: make sure that data is available when it is needed.
- Allocation: make sure that processes don't compete for the same PE.
- Partitioning: break operations into separate processes to increase parallelism; put serial operations in one process to reduce communication.
- Mapping: take PE and communication link characteristics into account.
Scheduling and allocation
- Must schedule/allocate:
  - computation
  - communication
- Performance may vary greatly with the allocation choice.
[Figure: processes P1, P2, P3 allocated between CPU1 and ASIC1]
Problems in scheduling/allocation
- Can multiple processes execute concurrently?
- Is the performance granularity of available components fine enough to allow efficient search of the solution space?
- Do computation and communication requirements conflict?
- How accurately can we estimate performance?
  - software
  - custom ASICs
Partitioning example

Before (one process):
  r = p1(a, b);
  s = p2(c, d);
  z = r + s;

After (p1 and p2 partitioned into separate processes that can run concurrently):
  r = p1(a, b);   ||   s = p2(c, d);
  z = r + s;
Problems in partitioning
- At what level of granularity must partitioning be performed?
- How well can you partition the system without an allocation?
- How does communication overhead figure into partitioning?
Problems in mapping
- Mapping and allocation are strongly connected when the components vary widely in performance.
- Software performance depends on the bus configuration as well as the CPU type.
- Mappings of PEs and communication links are closely related.
Program representations
- CDFG: single-threaded, executable; can extract some parallelism.
- Task graph: task-level parallelism, no operator-level detail.
  - TGFF generates random task graphs.
- UNITY: based on a parallel programming language.
Platform representations
- A technology table describes PE and channel characteristics:
  - CPU time
  - communication time
  - cost
  - power

  Type | Speed | Cost
  ARM7 | 50E6  | 10
  MIPS | 50E6  | 8

- A multiprocessor connectivity graph describes the PEs and channels.
[Figure: connectivity graph of PE1, PE2, PE3]
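A technology table like the one above is easy to query during co-synthesis, e.g. to pick the cheapest PE that still meets a task's deadline. A minimal sketch; the table entries loosely follow the slide's ARM7/MIPS rows, and `cycles_needed`/`deadline_s` are hypothetical task parameters:

```python
# Technology table: PE type -> characteristics (illustrative numbers).
tech_table = {
    "ARM7": {"speed_hz": 50e6, "cost": 10},
    "MIPS": {"speed_hz": 50e6, "cost": 8},
}

def cheapest_pe(table, cycles_needed, deadline_s):
    """Return the lowest-cost PE type whose execution time for the
    task (cycles / clock speed) meets the deadline, or None."""
    feasible = [
        (attrs["cost"], name)
        for name, attrs in table.items()
        if cycles_needed / attrs["speed_hz"] <= deadline_s
    ]
    return min(feasible)[1] if feasible else None

# Both PEs finish 1e6 cycles in 0.02 s; the MIPS is cheaper.
choice = cheapest_pe(tech_table, cycles_needed=1e6, deadline_s=0.1)
```

In a real tool the same lookup would also account for communication time and power, per the characteristics listed above.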
Hardware/software partitioning assumptions
- CPU type is known.
  - Can determine software performance.
- Number of processing elements is known.
  - Simplifies system-level performance analysis.
- Only one processing element can multi-task.
  - Simplifies system-level performance analysis.
Two early HW/SW partitioning systems
- Vulcan:
  - Start with all tasks on the accelerator.
  - Move tasks to the CPU to reduce cost.
- COSYMA:
  - Start with all functions on the CPU.
  - Move functions to the accelerator to improve performance.
Additional co-synthesis approaches
- Vahid: binary constraint search.
- CoWare: communicating processes model.
- Simulated annealing and tabu search heuristics [Ele96].
- LYCOS: CDFG representation [Mad97].
- Several others in the book (skim).
Multi-objective optimization
- Operations research provides notions for optimizing functions with multiple objectives.
- Pareto optimality: a Pareto-optimal solution cannot be improved in one objective without making another objective worse.
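Pareto optimality is mechanical to check: a design is dominated if some other design is no worse in every objective and strictly better in at least one. A minimal sketch over hypothetical (cost, latency) design points, lower being better in both:

```python
def pareto_front(designs):
    """Return the Pareto-optimal subset of (cost, latency) tuples,
    where lower is better in both objectives."""
    front = []
    for d in designs:
        # d is dominated if another design o is <= in both objectives
        # and differs (hence strictly better in at least one).
        dominated = any(
            o[0] <= d[0] and o[1] <= d[1] and o != d
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

# (3, 4) is dominated by (2, 3); the rest trade cost against latency.
front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1)])
```

The co-synthesis tool then only needs to choose among the surviving trade-off points rather than the whole design space.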
Large search space: genetic algorithms
- Modeled as:
  - Genes = strings of symbols.
  - Mutations = changes to strings.
- Types of moves:
  - Reproduction makes a copy of a string.
  - Mutation changes a string.
  - Crossover interchanges parts of two strings.
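The three moves above are small string operations. A minimal sketch, assuming genes are strings over a binary alphabet (the function names and the single-point crossover variant are illustrative choices, not from the lecture):

```python
import random

def reproduce(gene):
    """Reproduction: make an unchanged copy of a string."""
    return list(gene)

def mutate(gene, alphabet="01", rng=random):
    """Mutation: change one randomly chosen symbol of the string."""
    g = list(gene)
    i = rng.randrange(len(g))
    g[i] = rng.choice([c for c in alphabet if c != g[i]])
    return g

def crossover(a, b, rng=random):
    """Single-point crossover: interchange the tails of two strings."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
```

In a co-synthesis setting each string would encode a candidate allocation/mapping, and a fitness function (e.g. cost plus deadline penalties) would decide which offspring survive.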
Hardware/software co-simulation
- Must connect models with different models of computation and different time scales.
- A simulation backplane manages communication.
- Becker et al. used the PLI in Verilog-XL to add C code that communicates with software models, and UNIX networking to connect to the hardware simulator.
Mentor Graphics Seamless
- Hardware modules are described using standard HDLs.
- Software can be loaded as C or binary.
- A bus interface module connects hardware models to the processor instruction-set simulator.
- A coherent memory server manages shared memory.
Summary
- Platforms
- Performance analysis
- Design representations
- Hardware/software partitioning
- Co-synthesis for general multiprocessors
- Optimization concepts
- Simulation