
I proposed this method of converting a two's complement binary number to decimal to my professor, and he said it was wrong. Some older guy in the class just shook his head at me and gave me a condescending stare. I'm willing to admit I'm wrong when I'm wrong, but I can't find a case where it doesn't work.

Example

Take a two's complement binary representation of -38

1 0 1 1 0 1 0

The method I was taught is: starting from the right, keep each bit until you encounter the first 1; keep that 1 as well, then flip every bit to its left. That gives

0 1 0 0 1 1 0 = 38
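For concreteness, here is a small Python sketch of that rule (the function name is just illustrative):

```python
def negate_twos_complement(bits):
    """Negate a two's complement bit string: scanning from the right,
    keep bits up to and including the first 1, then flip the rest."""
    bits = list(bits)
    i = len(bits) - 1
    # Skip trailing zeros; stop at the first 1 (counted from the right).
    while i >= 0 and bits[i] == '0':
        i -= 1
    # Flip every bit to the left of that first 1.
    for j in range(i):
        bits[j] = '1' if bits[j] == '0' else '0'
    return ''.join(bits)

print(negate_twos_complement('1011010'))  # '0100110', i.e. 38
```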

The method I thought was easier, since it doesn't involve any bit manipulation, is to treat the most significant bit as having a negative weight. You would simply calculate the value as

1*(-64) + 0*32 + 1*16 + 1*8 + 0*4 + 1*2 + 0*1 = -38
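That weighted sum can be written out directly (again a sketch; the function name is mine):

```python
def twos_complement_value(bits):
    """Value of a two's complement bit string: the most significant bit
    carries weight -2**(n-1); every other bit its usual positive weight."""
    n = len(bits)
    value = -int(bits[0]) * 2 ** (n - 1)
    for i, b in enumerate(bits[1:], start=1):
        value += int(b) * 2 ** (n - 1 - i)
    return value

print(twos_complement_value('1011010'))  # -38
```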

What is wrong with my method?

dgreenheck

Your method is fine, but it accomplishes a different task. The algorithm you were taught negates a number, with both input and output in binary. Yours gives a formula for the value of a two's complement binary number. – Yuval Filmus Sep 15 '15 at 15:51

1 Answer


There is no error. In fact, if you add the binary representations of $-38$ and $38$ as unsigned 7-bit integers, you get $10000000_2$; the overflow discards everything above the lowest 7 bits, leaving $0$. This property holds for all $0 < x \le 2^6$: denoting by $u(x)$ the unsigned reading of the two's complement representation of $x$, we have $$u(-x) + x = 2^7 \quad (x>0),$$ thus $$\begin{align}-x &= u(-x)-2^7\\&=(u(-x)-2^6)+(-2^6),\end{align}$$ i.e. read the lower bits with their usual positive weights and give the highest bit the weight $-2^6$, which is exactly your method. This extends from 7 bits to $n$ bits.
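The identity above is easy to check numerically. A quick sketch (the helper name is my own; `& 0x7f` keeps the low 7 bits, which is exactly the overflow behavior described):

```python
def u_neg(x, n=7):
    """Unsigned reading of the n-bit two's complement representation of -x."""
    return (-x) & ((1 << n) - 1)

# u(-x) + x == 2**7 for every x representable as -x in 7 bits.
assert all(u_neg(x) + x == 2 ** 7 for x in range(1, 2 ** 6 + 1))
print(u_neg(38))  # 90, and 90 + 38 == 128 == 2**7
```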

Terence Hang