

Appendix D: Glossary and Acronym List
Pages 278-290



From page 278...
... ASCI. Accelerated Strategic Computing Initiative, which provides simulation and modeling capabilities and technologies as part of the DOE/NNSA Stockpile Stewardship Program.
From page 279...
... Caches aim to provide the illusion of a memory as large as main memory but with fast performance. They succeed in doing so if memory accesses have good temporal locality and good spatial locality.
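The locality effect described here can be illustrated with a small simulation. The cache parameters and function names below are illustrative assumptions, not a model of any particular hardware:

```python
# Minimal sketch of a direct-mapped cache, showing why good spatial
# locality raises the hit rate: sequential accesses reuse the same cached
# block, while large-stride accesses evict and miss repeatedly.
# BLOCK_SIZE and NUM_SETS are assumed values, chosen for illustration.

BLOCK_SIZE = 8    # addresses per cache block
NUM_SETS = 64     # number of cache sets

def hit_rate(addresses):
    """Fraction of accesses served by the cache."""
    cache = {}  # set index -> tag of the block currently stored there
    hits = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        index = block % NUM_SETS
        tag = block // NUM_SETS
        if cache.get(index) == tag:
            hits += 1
        else:
            cache[index] = tag  # miss: fill the set with this block
    return hits / len(addresses)

sequential = list(range(4096))            # good spatial locality
strided = [i * 512 for i in range(4096)]  # poor spatial locality

# hit_rate(sequential) -> 0.875 (7 of every 8 accesses hit)
# hit_rate(strided)    -> 0.0   (every access maps to a new tag)
```

Sequential accesses hit on every address within a block after the first, while the strided pattern touches a different block on every access.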
From page 280...
... composite theoretical performance. CTP is a measure of the performance of a computer that is calculated using a formula that combines various system parameters.
From page 281...
... The memory access time depends on the memory access pattern; row access time (or row access latency) is the worst-case access time for irregular accesses.
From page 282...
... A custom processor designed to provide significantly higher effective memory bandwidth than commodity processors normally provide.
high-end computing (HEC)
From page 283...
... A measure of delay. Memory latency is the time needed to access data in memory; global communication latency is the time needed to effect a communication between two nodes through the interconnect.
From page 284...
... multithreaded processor. A processor that concurrently or simultaneously executes multiple threads, where the threads share computational resources (as distinct from a multiprocessor, where threads do not share computational resources).
From page 285...
... NNSA. National Nuclear Security Administration, the organization within DOE that manages the Stockpile Stewardship Program, which is responsible for manufacturing, maintaining, refurbishing, surveilling, and dismantling the nuclear weapons stockpile.
From page 286...
... Parallel efficiency is an indication of scalability; it normally decreases as the number of processors increases, indicating a diminishing marginal return as more processors are applied to the solution of one problem.
parallel speedup.
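The quantities described here are commonly defined as speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A minimal sketch, with assumed function names and invented example timings:

```python
# Sketch of the usual definitions of parallel speedup and efficiency.
# The names and the example timings below are assumptions for illustration.

def speedup(t_serial, t_parallel):
    """Ratio of one-processor run time to p-processor run time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Speedup divided by the number of processors used."""
    return speedup(t_serial, t_parallel) / p

# A job taking 100 s on 1 processor and 20 s on 8 processors:
# speedup(100, 20)       -> 5.0
# efficiency(100, 20, 8) -> 0.625
```

In the example, 8 processors yield only a 5x speedup, so efficiency is 0.625 rather than 1.0, illustrating the diminishing marginal return mentioned above.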
From page 287...
... that involve sparse matrices, where many entries are zero. Sparse linear algebra codes use data structures that store only the nonzero matrix entries, thus saving storage.
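One common such data structure is compressed sparse row (CSR) storage; a minimal sketch, where the function name and the example matrix are illustrative assumptions:

```python
def to_csr(dense):
    """Compress a dense row-major matrix, keeping only nonzero entries.

    Returns three arrays: the nonzero values, the column index of each
    value, and row pointers marking where each row starts in `values`.
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

matrix = [
    [5, 0, 0],
    [0, 0, 3],
    [2, 0, 1],
]
# to_csr(matrix) -> ([5, 3, 2, 1], [0, 2, 0, 2], [0, 1, 2, 4])
```

The 9-entry dense matrix is stored using only 4 values plus index arrays; for large matrices that are mostly zero, the saving dominates the index overhead.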
From page 288...
... temporal locality ensures that caches can effectively capture most memory accesses, since most accesses will be to data that were accessed recently and that still reside in the cache.
From page 289...
... Total cost of ownership can be significantly higher than the purchase cost, and systems with a lower purchase cost can have higher total cost of ownership.
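The point can be illustrated with a hypothetical calculation; all figures and names below are invented for illustration:

```python
# Hypothetical sketch of how a system with a lower purchase cost can have
# a higher total cost of ownership once operating costs are included.

def total_cost_of_ownership(purchase, annual_operating, years):
    """Purchase cost plus operating costs over the system's lifetime."""
    return purchase + annual_operating * years

# Two invented systems over an assumed 5-year lifetime:
system_a = total_cost_of_ownership(1_000_000, 150_000, 5)  # 1,750,000
system_b = total_cost_of_ownership(1_200_000, 80_000, 5)   # 1,600,000
# System A is cheaper to buy but costlier to own over 5 years.
```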

