Method for Experimental Measurement of an Application's Memory Bus Usage
University West, Department of Economics and IT, Division of Computer Science and Informatics. ORCID iD: 0000-0001-7232-0079
University West, Department of Economics and IT, Division of Computer Science and Informatics.
2010 (English). In: / [ed] Hamid Arabnia, CSREA Press, 2010. Conference paper, Published paper (Refereed)
Abstract [en]

The disparity between processor and memory bus capacities has grown steadily over the last decades. With the introduction of multi-core processors, the memory bus capacity is divided between the simultaneously executing processes (cores). The memory bus capacity directly affects the number of applications that can execute simultaneously at their full potential. Against this backdrop, it becomes important to estimate how the limitations of the memory bus affect application performance. Towards this end we introduce a method and a tool for experimentally estimating an application's memory requirement, as well as the impact that sharing the memory bus has on execution times. The tool enables black-box, approximate profiling of an application's memory bus usage during execution. It executes entirely in user space and does not require access to the application's source code, only the binary.
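The black-box approach the abstract describes can be sketched as timing a target binary while companion processes load the memory bus. The sketch below is an illustrative assumption, not the authors' tool: the buffer size, the `_generate_traffic` and `time_under_traffic` names, and the choice of process counts are all invented here, and a pure-Python sweep generates far less bus traffic than a tuned native generator would.

```python
# Sketch: time an arbitrary binary while N "traffic generator" processes
# sweep a large buffer to put load on the memory bus. Runs entirely in
# user space and needs only the executable, not its source.
import multiprocessing
import subprocess
import sys
import time

def _generate_traffic(stop):
    # Sweep a buffer much larger than a typical last-level cache so
    # reads keep missing in cache and go out to the memory bus.
    buf = bytearray(64 * 1024 * 1024)
    sink = 0
    while not stop.is_set():
        for i in range(0, len(buf), 64):  # roughly one read per cache line
            sink += buf[i]

def time_under_traffic(cmd, n_generators):
    """Run cmd once alongside n_generators traffic processes; return seconds."""
    stop = multiprocessing.Event()
    procs = [multiprocessing.Process(target=_generate_traffic, args=(stop,))
             for _ in range(n_generators)]
    for p in procs:
        p.start()
    try:
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start
    finally:
        stop.set()
        for p in procs:
            p.join()

if __name__ == "__main__":
    # Example: profile a trivial command at increasing traffic levels.
    cmd = [sys.executable, "-c", "pass"]
    profile = {n: time_under_traffic(cmd, n) for n in (0, 1, 2)}
    print(profile)
```

Comparing the timings across traffic levels gives an approximate picture of how sensitive the application is to memory bus contention, without instrumenting its code.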

Place, publisher, year, edition, pages
CSREA Press, 2010.
Keyword [en]
Memory, experimental measurement, Multi-core, tool, prediction
National Category
Computer Engineering
Research subject
ENGINEERING, Computer engineering
Identifiers
URN: urn:nbn:se:hv:diva-2445
OAI: oai:DiVA.org:hv-2445
DiVA: diva2:317185
Conference
The 2010 International Conference on Parallel and Distributed Processing Techniques and Applications
Available from: 2010-05-03 Created: 2010-05-03 Last updated: 2016-04-07 Bibliographically approved
In thesis
1. A Slowdown Prediction Method to Improve Memory Aware Scheduling
2016 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Scientific and technological advances in the area of integrated circuits have allowed the performance of microprocessors to grow exponentially since the late 1960s. However, the imbalance between processor and memory bus capacity has increased in recent years. The increasing on-chip parallelism of multi-core processors has turned the memory subsystem into a key factor for achieving high performance. When two or more processes share the memory subsystem their execution times typically increase, even at relatively low levels of memory traffic. Current research shows that a throughput increase of up to 40% is possible if the job-scheduler can minimize the slowdown caused by memory contention in industrial multi-core systems such as high-performance clusters, datacenters, or clouds. In order to optimize throughput, the job-scheduler has to know how much slower a process will execute when co-scheduled on the same server as other processes. Consequently, unless the slowdown is known, or can be fairly well estimated, the scheduling becomes pure guesswork and performance suffers. The central question addressed in this thesis is how, and to what extent, the slowdown caused by memory traffic interference between processes executing on the same server can be predicted. This thesis presents and evaluates a new slowdown prediction method which estimates how much longer a program will execute when co-scheduled on the same multi-core server as another program. The method measures how external memory traffic affects a program by generating different levels of synthetic memory traffic while observing the change in execution time. Based on the observations it makes a first-order prediction of how much slowdown the program will experience when exposed to external memory traffic. Experimental results show that the method's predictions correlate well with the real measured slowdowns.
Furthermore, it is shown that scheduling based on the new slowdown prediction method yields a higher throughput than three other techniques suggested for avoiding co-scheduling slowdowns caused by memory contention. Finally, a novel scheme is suggested to avoid some of the worst co-schedules, thus increasing the system throughput.
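The first-order prediction the abstract describes can be illustrated as a linear fit of execution time against synthetic traffic level, extrapolated to the traffic a co-runner is expected to generate. The plain least-squares fit and the function names below are illustrative assumptions, not the exact procedure from the thesis.

```python
# Sketch of a first-order slowdown prediction: fit a line to execution
# times observed under increasing levels of synthetic memory traffic,
# then evaluate it at the external traffic level of interest.

def fit_first_order(traffic_levels, exec_times):
    """Least-squares fit of exec_time = base + slope * traffic."""
    n = len(traffic_levels)
    mean_x = sum(traffic_levels) / n
    mean_y = sum(exec_times) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(traffic_levels, exec_times))
    var = sum((x - mean_x) ** 2 for x in traffic_levels)
    slope = cov / var
    base = mean_y - slope * mean_x
    return base, slope

def predict_slowdown(base, slope, external_traffic):
    """Predicted relative slowdown under external_traffic (1.0 = no slowdown)."""
    return (base + slope * external_traffic) / base

# Example: run times (s) measured at 0, 1, 2, 3 GB/s of synthetic traffic.
base, slope = fit_first_order([0, 1, 2, 3], [10.0, 11.0, 12.0, 13.0])
print(predict_slowdown(base, slope, 2.5))  # -> 1.25
```

A scheduler could use such predictions to avoid pairing processes whose combined traffic pushes either one into a steep part of its slowdown curve.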

Place, publisher, year, edition, pages
Göteborg: Chalmers University of Technology, 2016. 19 p.
Series
Doktorsavhandlingar vid Chalmers tekniska högskola, Ny serie, ISSN 0346-718X ; 4050
Keyword
Multi-core processor, slowdown aware scheduling, memory bandwidth, resource contention, last level cache, co-scheduling, performance evaluation
National Category
Computer Systems; Information Systems, Social aspects
Research subject
ENGINEERING, Computer engineering
Identifiers
urn:nbn:se:hv:diva-9300 (URN)
978-91-7597-369-2 (ISBN)
Public defence
2016-04-19, EC, Hörsalsvägen 11, Chalmers, Göteborg, 10:00 (English)
Available from: 2016-04-07 Created: 2016-04-07 Last updated: 2016-04-07 Bibliographically approved

Open Access in DiVA

No full text

Other links

Conference homepage

Search in DiVA

By author/editor
de Blanche, Andreas; Mankefors-Christiernin, Stefan
By organisation
Division of Computer Science and Informatics
Computer Engineering
