Persistent identifier to cite or link this item: http://hdl.handle.net/10553/76991
Título: Improving memory latency aware fetch policies for SMT processors
Authors: Cazorla, Francisco J.
Fernández García, Enrique 
Ramírez, Alex
Valero, Mateo
UNESCO classification: 3304 Computer technology
330412 Control devices
Keywords: SMT
Multithreading
Fetch Policy
Long Latency Loads
Load Miss Predictors
Publication date: 2003
Publisher: Springer
Serial publication: Lecture Notes in Computer Science
Conference: 5th International Symposium on High Performance Computing / 3rd International Workshop on OpenMP: Experiences and Implementations (WOMPEI 2003)
Abstract: In SMT processors several threads run simultaneously to increase available ILP, sharing but competing for resources. The instruction fetch policy plays a key role, determining how shared resources are allocated. When a thread experiences an L2 miss, critical resources can be monopolized for a long time, choking the execution of the remaining threads. A primary task of the instruction fetch policy is to prevent this situation. In this paper we propose novel improved versions of the three best published policies addressing this problem. Our policies significantly enhance the original ones in throughput and fairness, while also reducing energy consumption.
URI: http://hdl.handle.net/10553/76991
ISBN: 978-3-540-20359-9
ISSN: 0302-9743
Source: High Performance Computing. ISHPC 2003. Lecture Notes in Computer Science [ISSN 0302-9743], v. 2858, p. 70-85, (2003)
Collection: Conference proceedings
Full text: Adobe PDF (211.29 kB)

Scopus™ citations: 18 (updated 15-Dec-2024)

Web of Science™ citations: 7 (updated 25-Feb-2024)

Visits: 80 (updated 04-May-2024)

Downloads: 66 (updated 04-May-2024)

Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.