Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/50504
Title: A latency-conscious SMT branch prediction architecture
Authors: Falcón, Ayose
Santana, Oliverio J. 
Ramirez, Alex
Valero, Mateo
UNESCO Classification: 330406 Computer architecture (Arquitectura de ordenadores)
Keywords: Branch predictor delay
Decoupled predictor
Fetch engine
Predictor pipelining
SMT
Issue Date: 2004
Journal: International Journal of High Performance Computing and Networking 
Abstract: Executing multiple threads has proved to be an effective solution to partially hide latencies that appear in a processor. When a thread is stalled because a long-latency operation, such as a memory access or a floating-point calculation, is being processed, the processor can switch to another context so that another thread can take advantage of the idle resources. However, fetch stall conditions caused by branch predictor delay are not hidden by current simultaneous multithreading (SMT) fetch designs, causing a performance drop due to the absence of instructions to execute. In this paper, we propose several solutions to reduce the effect of branch predictor delay on the performance of SMT processors. Firstly, we analyse the impact of varying the number of predictor access ports. Secondly, we describe a decoupled implementation of an SMT fetch unit that helps to tolerate the predictor delay. Finally, we present an interthread pipelined branch predictor, based on creating a pipeline of interleaved predictions from different threads. Our results show that, combining all the proposed techniques, the performance obtained is similar to that obtained using an ideal, 1-cycle access branch predictor.
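As a rough illustration of the interthread pipelining idea summarised in the abstract, the following Python sketch interleaves predictions from several threads through a multi-cycle predictor pipeline, so that a new lookup can be started every cycle even though each individual prediction takes several cycles to complete. The class names, the round-robin thread selection, and the trivial always-taken direction predictor are assumptions made for illustration only; they are not taken from the paper.

    from collections import deque

    class PipelinedPredictor:
        """Toy branch predictor with a fixed access latency, modelled as a
        pipeline: a lookup started in cycle t completes `latency` ticks later."""

        def __init__(self, latency=3):
            self.latency = latency
            # In-flight lookups: [thread_id, pc, cycles_remaining]
            self.pipeline = deque()

        def start_lookup(self, thread_id, pc):
            """Begin a prediction for one thread (at most one new lookup per cycle)."""
            self.pipeline.append([thread_id, pc, self.latency])

        def tick(self):
            """Advance one cycle; return a prediction if one completes."""
            for entry in self.pipeline:
                entry[2] -= 1
            if self.pipeline and self.pipeline[0][2] == 0:
                thread_id, pc, _ = self.pipeline.popleft()
                # Trivial direction predictor: always predict taken (illustrative only).
                return (thread_id, pc, True)
            return None

    def run(num_threads=4, cycles=12, latency=3):
        predictor = PipelinedPredictor(latency)
        pcs = [0x1000 * (t + 1) for t in range(num_threads)]
        for cycle in range(cycles):
            # Round-robin: each cycle a different thread starts a lookup, so the
            # predictor pipeline stays full and each thread's prediction latency
            # is overlapped with lookups from the other threads.
            t = cycle % num_threads
            predictor.start_lookup(t, pcs[t])
            pcs[t] += 4
            completed = predictor.tick()
            if completed:
                tid, pc, taken = completed
                print(f"cycle {cycle}: prediction ready for thread {tid} "
                      f"(pc={pc:#x}, taken={taken})")

    if __name__ == "__main__":
        run()

Under these assumptions, after the initial fill of the predictor pipeline one prediction becomes available every cycle, which is the effect the paper attributes to interleaving predictions from different threads.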
URI: http://hdl.handle.net/10553/50504
ISSN: 1740-0562
DOI: 10.1504/IJHPCN.2004.009264
Source: International Journal of High Performance Computing and Networking [ISSN 1740-0562], v. 2 (1), p. 11-21
Appears in Collections: Artículos
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.