We investigate how memory consumption, dataset size and software performance correlate…
As I already mentioned in earlier posts, vectorization is the holy grail of software optimizations: if your hot loop is efficiently vectorized, it is pretty much running at the fastest possible speed. So it is definitely a goal worth pursuing, under two assumptions: (1) that your code has a hardware-friendly memory access pattern and (2) that…
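As a minimal sketch of condition (1), here is a loop of my own (not from the post) that compilers typically autovectorize, because it touches memory sequentially with unit stride and has no loop-carried dependency:

```cpp
#include <cstddef>

// A good autovectorization candidate: sequential (unit-stride) accesses,
// no dependency between iterations, simple arithmetic in the body.
// Compile with e.g. `g++ -O3 -march=native` and use `-fopt-info-vec` (GCC)
// to check whether the compiler actually vectorized the loop.
void scale_add(float* __restrict dst, const float* __restrict a,
               const float* __restrict b, float k, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        dst[i] = a[i] * k + b[i];
    }
}
```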
We try to answer the question of why quicksort is faster than heapsort, and then we dig deeper into these algorithms’ hardware efficiency. The goal: making them faster.
For all the engineers who like to tinker with software performance, vectorization is the holy grail: if it vectorizes, it runs faster. Unfortunately, this is often not the case, and forcing vectorization by any means can result in lower performance. This happens when vectorization hits the memory wall: although…
We explore how a class’s size and the layout of its data members affect your program’s speed.
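To make the idea concrete, here is a small hypothetical example of mine (not taken from the article) showing how member ordering alone changes a class’s size because of alignment padding:

```cpp
#include <cstdint>
#include <cstdio>

// Members ordered so the compiler must insert padding between them.
struct Padded {
    std::uint8_t  a;   // 1 byte, then 7 bytes of padding
    std::uint64_t b;   // 8 bytes, must be 8-byte aligned
    std::uint8_t  c;   // 1 byte, then 7 bytes of tail padding
};

// The same members, reordered from largest to smallest.
struct Packed {
    std::uint64_t b;   // 8 bytes
    std::uint8_t  a;   // 1 byte
    std::uint8_t  c;   // 1 byte, then 6 bytes of tail padding
};

int main() {
    // On a typical 64-bit platform this prints 24 and 16. Smaller objects
    // mean more of them fit into one cache line, which is what affects speed.
    std::printf("sizeof(Padded) = %zu, sizeof(Packed) = %zu\n",
                sizeof(Padded), sizeof(Packed));
}
```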
This is the first article about hardware support for parallelization. We talk about SIMD, an extension that almost every modern processor has and that lets you speed up your program.
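As a quick illustration of what SIMD means in practice, here is a sketch using SSE intrinsics (x86 only; the function name and loop are my own assumptions, not the article’s code) that performs four float additions per instruction instead of one:

```cpp
#include <immintrin.h>  // x86 SSE/AVX intrinsics
#include <cstddef>

// Adds two float arrays using 128-bit SSE registers: four additions per
// _mm_add_ps instruction. Assumes n is a multiple of 4; a real
// implementation would also handle the scalar tail.
void add_arrays_sse(float* dst, const float* a, const float* b, std::size_t n) {
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);             // load 4 floats from a
        __m128 vb = _mm_loadu_ps(b + i);             // load 4 floats from b
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));  // store 4 sums
    }
}
```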
We investigate the performance impact of multithreading.
When processing your data structure (searching, inserting, etc.), if you access it in a random-access fashion, performance will suffer due to frequent data cache misses. Read on to learn how to use explicit data prefetching to speed up access to your data structure.
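For a concrete, simplified picture of what explicit prefetching can look like, here is a sketch using the GCC/Clang `__builtin_prefetch` intrinsic; the indirect traversal and the look-ahead distance are illustrative assumptions, not the article’s code:

```cpp
#include <cstddef>
#include <vector>

// Sums elements gathered through an index array (a random-access pattern).
// While working on element i, we ask the hardware to start fetching the
// element we will need a few iterations from now, hiding part of the
// cache-miss latency. The look-ahead distance of 8 is a tunable guess.
double sum_indirect(const std::vector<double>& data,
                    const std::vector<std::size_t>& indices) {
    constexpr std::size_t kLookAhead = 8;
    double sum = 0.0;
    for (std::size_t i = 0; i < indices.size(); ++i) {
        if (i + kLookAhead < indices.size()) {
            // Arguments: address, rw (0 = read), temporal locality hint.
            __builtin_prefetch(&data[indices[i + kLookAhead]], 0, 1);
        }
        sum += data[indices[i]];
    }
    return sum;
}
```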
We will talk about expensive instructions in modern CPUs and how to avoid them to speed up your program.
If your program uses dynamic memory, its speed will depend not only on allocation time but also on memory access time. Here we investigate how memory access time depends on the memory layout of your data structure. We also investigate ways to speed up your program by laying out your data structure optimally.
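To give a flavor of what “laying out your data structure optimally” can mean, here is a hypothetical sketch of mine (not the article’s code) contrasting a pointer-chasing layout with a contiguous one for the same logical sequence of values:

```cpp
#include <vector>

// Layout 1: each node is a separate heap allocation. Nodes can end up
// scattered across memory, so traversal tends to miss the data cache.
struct ListNode {
    int value;
    ListNode* next;
};

long long sum_list(const ListNode* head) {
    long long sum = 0;
    for (const ListNode* n = head; n != nullptr; n = n->next) sum += n->value;
    return sum;
}

// Layout 2: the same values stored contiguously. Traversal is sequential,
// so the hardware prefetcher keeps the data cache warm.
long long sum_vector(const std::vector<int>& values) {
    long long sum = 0;
    for (int v : values) sum += v;
    return sum;
}
```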