InnocentZero's Treasure Chest


24 Oct 2025

Branch predictions

Based on this talk by Fedor Pikus.

branch predictors

Modern CPUs can usually issue several operations per clock cycle. So if all instructions were independent, we could in theory execute five or six instructions at once.

There are two types of dependencies in software: data dependencies and control (conditional) dependencies.

A data dependency occurs when the current computation needs the result of a previous one. The solution is pipelining: interleave the instructions of multiple iterations so that independent work fills the gaps. It helps a lot, but it can lead to data hazards.
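As a sketch of the idea (the function name is mine, not from the talk): a single accumulator forms a loop-carried data dependency, since every addition waits on the previous one. Interleaving two independent accumulators, much like pipelining interleaves iterations, lets the additions overlap.

```cpp
#include <cassert>
#include <cstddef>

// A single accumulator would serialize every +=; two independent
// accumulators form two dependency chains the CPU can overlap.
long sum_interleaved(const long* a, std::size_t n) {
    long s0 = 0, s1 = 0;          // two independent chains
    std::size_t i = 0;
    for (; i + 1 < n; i += 2) {
        s0 += a[i];               // chain 0
        s1 += a[i + 1];           // chain 1
    }
    if (i < n) s0 += a[i];        // odd leftover element
    return s0 + s1;
}
```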

However, this is all fun and games until a branch goes the other way. Those cases cause control hazards. To mitigate them, CPUs have branch predictors.

Mispredicting

If the branch predictor is wrong, several things need to be done:

  • All speculatively executed work must be discarded/aborted. That means the pipeline needs to be flushed.
  • The correct instructions need to be fetched and started.
  • Any results that were computed speculatively need to be undone. If something cannot be undone, it cannot be done speculatively.

Optimization

Don't optimize a branch unless you're sure (by profiling) that it actually mispredicts.

Example 1:

if (x || y)
    do();
else
    do_else();

What we see:

  • If either of them is true, call do().

What compiler sees:

  • If x is true do it.
  • If x is false:
    • if y is true do it.

This matters because of short-circuit evaluation: the compiler must emit two branches. It doesn't have a choice.

What doesn't work:

  • Temporary variable; the compiler will just remove it in most cases.

What does work:

  • Rewrite it as bool(x) + bool(y) or bool(x) | bool(y).
    • This works as long as the compiler doesn't optimize it back into a logical or.
    • Compilers usually do that for only one of the two forms.

However, there are pitfalls:

  • This does more computation, so you need to be sure the overhead is low.
  • You also need the add unit to be free around that time (check with profiling).
  • || does less work in general.
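A minimal sketch of the rewrite (function names are mine): `x || y` short-circuits, so the compiler emits two conditional branches, while `bool(x) + bool(y)` evaluates both operands unconditionally and leaves a single test on the sum.

```cpp
// Short-circuiting form: two branches, y is not evaluated if x is true.
bool either_branchy(int x, int y)    { return x || y; }

// Branchless form: both operands evaluated, one test on the sum.
// A compiler may still rewrite one form into the other; check the
// generated assembly and profile before relying on this.
bool either_branchless(int x, int y) { return bool(x) + bool(y); }
```

Both return the same result for all inputs; only the evaluation strategy differs.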

Example 2:

sum += cond ? expr1 : expr2;

Branchless implementation:

int term[2] = { expr2, expr1 };
sum += term[bool(cond)];

Improves performance if:

  • extra computation is small.
  • branch is poorly predicted

Though example 2 is very well optimized by pretty much every compiler in existence (at least the ones that matter).

However, NEVER do example 2 with functions: initializing the array evaluates both expressions, so both functions run, and you fail spectacularly.
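A sketch of one way this goes wrong (f1/f2 and the call counter are hypothetical, purely for illustration): building the lookup table calls both functions on every iteration, even though only one result is used.

```cpp
// Hypothetical functions instrumented to count how often they run.
static int calls = 0;
int f1() { ++calls; return 10; }
int f2() { ++calls; return 20; }

// The "branchless" select: BOTH calls happen when the array is built.
int pick(bool cond) {
    int term[2] = { f2(), f1() };
    return term[bool(cond)];
}
```

With the plain ternary `cond ? f1() : f2()`, only one of the two would have been called.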

Structure of branch predictors

It has two components: a direction predictor and a Branch Target Buffer (BTB).

The branch target buffer is a cache of pc:jump-target entries. Conceptually there is one entry per possible pc value, i.e. \(2^{32}\) of them on a 32-bit machine, which is why in practice it is organized as a cache. On the first encounter of a branch the CPU stalls, computes the target address, and stores it; on subsequent encounters it reads the target straight from the buffer.
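As a conceptual sketch only (real BTBs are fixed-size associative hardware tables, not hash maps; the structure and names here are mine), the BTB behaves like a cache from a branch's pc to its predicted target:

```cpp
#include <cstdint>
#include <unordered_map>

struct BranchTargetBuffer {
    std::unordered_map<uint32_t, uint32_t> table;  // pc -> target

    // Returns false on the first encounter (the pipeline would stall
    // while the target is computed), true once the target is cached.
    bool lookup(uint32_t pc, uint32_t& target) const {
        auto it = table.find(pc);
        if (it == table.end()) return false;
        target = it->second;
        return true;
    }

    // Store the computed target so later encounters hit the cache.
    void record(uint32_t pc, uint32_t target) { table[pc] = target; }
};
```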

The direction predictor is also a table of sorts, holding a saturating counter (see the state diagrams below) for every pc. Each time a branch is encountered, the counter for that pc is fetched and the prediction made accordingly; once the branch resolves, the counter is updated with the actual outcome.
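The 2-bit counter from the diagrams below can be sketched as follows (the struct is my own minimal model): states 0-1 predict not-taken, states 2-3 predict taken, and updates move one step at a time, saturating at the ends, so a single anomalous outcome can't flip a strongly held prediction.

```cpp
struct TwoBitCounter {
    int state = 2;  // start in "weakly taken"

    // States 2 and 3 predict taken; 0 and 1 predict not-taken.
    bool predict() const { return state >= 2; }

    // Move one step toward the actual outcome, saturating at 0 and 3.
    void update(bool taken) {
        if (taken) { if (state < 3) ++state; }
        else       { if (state > 0) --state; }
    }
};
```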

Figure: state machine of a 1-bit predictor (branch_predictor_1bit.png)
Figure: state machine of a 2-bit saturating-counter predictor (branch_predictor_2bit.png)

This is good for isolated branches like loops and if-else. For nested or correlated branches, however, performance can be improved by using something called gshare.

The global history register is a shift register that stores the outcomes of the previous 32 branches. To choose an index into the direction predictor, we first XOR the pc with the global history value and then use the result. The branch target buffer indexing remains the same.

When a branch resolves, we update both structures: the selected counter is updated with the outcome, and the outcome bit is shifted into the global history register.
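The indexing and the update can be sketched like this (the 12-bit table size is an assumption for illustration; the text above describes a 32-bit history):

```cpp
#include <cstdint>

// Assumed table size: 2^12 counters, so indices are 12 bits wide.
constexpr uint32_t kBits = 12;
constexpr uint32_t kMask = (1u << kBits) - 1;

// Global history register: outcomes of recent branches, newest in bit 0.
uint32_t ghr = 0;

// Direction-predictor index: XOR the pc with the history, then mask.
// The BTB, by contrast, is still indexed by pc alone.
uint32_t gshare_index(uint32_t pc) { return (pc ^ ghr) & kMask; }

// On branch resolution, shift the actual outcome into the history.
void update_history(bool taken) {
    ghr = ((ghr << 1) | (taken ? 1u : 0u)) & kMask;
}
```

XORing pc with the history means the same branch gets different counters depending on the path taken to reach it, which is what helps with correlated branches.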


Creative Commons License
This website by innocentzer0 is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.