Two weeks ago, I came across an interesting bug. The convert() function
below returns 0x80000001 when p points to the bytes 0x01, 0x00, 0x00,
0x80, but the expected return value is 0x00000001.
int32_t convert(const uint8_t *restrict p) {
    uint32_t x = ( p[0] +
                   256 * p[1] +
                   256 * 256 * p[2] +
                   256 * 256 * 256 * p[3] );
    return x;
}
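Note what happens in the last term of the sum: each p[i] is an uint8_t, so the usual arithmetic conversions promote it to (signed) int before the multiplications. With p[3] == 0x80, the last term computes 0x80 * 2^24 = 2^31 in int arithmetic, which overflows INT_MAX and is undefined behavior in C. A minimal sketch of a variant that keeps the whole computation unsigned (convert_fixed is my name, not the original fix):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a well-defined little-endian load: casting each byte to
   uint32_t before shifting keeps all arithmetic unsigned, so no signed
   overflow can occur regardless of the byte values. */
int32_t convert_fixed(const uint8_t *restrict p) {
    uint32_t x = (uint32_t)p[0]
               | (uint32_t)p[1] << 8
               | (uint32_t)p[2] << 16
               | (uint32_t)p[3] << 24;
    /* Converting a uint32_t above INT32_MAX to int32_t is
       implementation-defined before C23, but on two's-complement
       targets it yields the expected bit pattern. */
    return (int32_t)x;
}
```

Whether the shift-based form matches the author's eventual fix is an assumption; the point is only that the promotion-to-int trap disappears once every operand is already unsigned and at least 32 bits wide.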




