When an instruction depends on the previous instruction depends on the previous instructions… : long instruction dependency chains and performance

In this post we investigate long dependency chains: when an instruction depends on the previous instruction, which depends on the previous instruction… We want to see how long dependency chains lower CPU performance, and we want to measure how interleaving two dependency chains (by interleaving two operations) reflects on software performance. Operations with long…

The memory subsystem from the viewpoint of software: how the memory subsystem affects software performance 2/3

We continue the investigation from the previous post, trying to measure how the memory subsystem affects software performance. We write small programs (kernels) to quantify the effects of cache lines, memory latency, the TLB cache, cache conflicts, vectorization, and branch prediction.

Crash course introduction to parallelism: Multithreading

In this post we introduce the essentials of programming for systems with several CPU cores. We start with an explanation of software threads and synchronization, two fundamental building blocks of multithreaded programming. We explain how these are implemented in hardware, and finally, we present several multithreading APIs you can use for parallel programming.

Vectorization, dependencies and outer loop vectorization: if you can’t beat them, join them

As I already mentioned in earlier posts, vectorization is the holy grail of software optimizations: if your hot loop is efficiently vectorized, it is pretty much running at the fastest possible speed. So, it is definitely a goal worth pursuing, under two assumptions: (1) that your code has a hardware-friendly memory access pattern and (2) that…

When vectorization hits the memory wall: investigating the AVX2 memory gather instruction

For all the engineers who like to tinker with software performance, vectorization is the holy grail: if it vectorizes, that means it runs faster. Unfortunately, this is often not the case, and forcing vectorization by any means can result in lower performance. This happens when vectorization hits the memory wall: although…