Problem description:

There are two variables, x which is known to be in the range [a, b], and y which is known to be in the range [c, d].

The problem is: compute the tightest range that contains all the possible values that x|y (that's a bitwise OR) could take.

The solution, if you want it, can be found in Hacker's Delight.

Now, the real reason I posted this is that I discovered a more efficient algorithm than the one in HD (it recently appeared on the HD website, in the Contributions from Correspondents section), and I'd like to try to prove its correctness formally (I only have a sort of informal justification for it).

Spoiler contains the algorithm I discovered + justification (no proof).

Explanation below.

```c
unsigned minOR(unsigned a, unsigned b, unsigned c, unsigned d)
{
    // nlz() is the number-of-leading-zeros function from Hacker's Delight
    unsigned settablea = (a ^ b) == 0 ? 0 : 0xFFFFFFFF >> nlz(a ^ b);
    unsigned settablec = (c ^ d) == 0 ? 0 : 0xFFFFFFFF >> nlz(c ^ d);
    unsigned candidatebitsa = (~a & c) & settablea;
    unsigned candidatebitsc = (a & ~c) & settablec;
    unsigned candidatebits = candidatebitsa | candidatebitsc;
    unsigned target = candidatebits == 0 ? 0 : 0x80000000 >> nlz(candidatebits);
    // small optimization compared to ~a & target
    unsigned targeta = c & target;
    unsigned targetc = a & target;
    unsigned newa = a & (targeta == 0 ? -1 : -targeta);
    unsigned newc = c & (targetc == 0 ? -1 : -targetc);
    // no need to actually set the target bit, it will be 1 in the other bound by construction
    return newa | newc;
}
```

It's based on the algorithm from HD. It uses the same strategy, but takes a shortcut. Both algorithms work by increasing a or c so as to set a bit that is already set in the other bound (so setting it does not increase the value of a|c); in return, that lets you reset all the bits less significant than that bit, so the value of a|c can go down. Of course, you cannot increase a beyond b, nor c beyond d. It should be possible to prove that picking the highest bit that can be set in that way actually yields the lower bound.

HD's algorithm starts at the top bit and works its way down until it finds the first bit it can set in either a or c.

The algorithm I discovered relies on the fact that a <= b, and therefore the highest bit k at which a and b differ must be 0 in a and 1 in b. (a | (1 << k)) & -(1 << k) must then be less than or equal to b, because it has the same prefix as b and then continues with only zeroes.

So the "using this bit won't make a bigger than b" test is a simple bitmask. That mask can be ANDed with the other condition (call the result of that AND the candidate bits), and then the highest set bit is the highest bit that passes both tests. If you do this for both bounds, the highest set bit in the union of the two sets of candidate bits is the bit that can be set in one of the bounds (at that point you've forgotten which bound, but that's easy to find out again) such that it minimizes a|c.

So... any tips on how to prove it formally?