Lace said:
What's the difference between a signed and unsigned char?
(stupid question I know, but I gotta learn these things somehow)
A signed char can represent both negative and positive numbers, but its maximum value is halved as a result. An unsigned char can only represent non-negative numbers. If you're using the variable to store an ASCII character, the difference rarely matters, because ASCII values only go up to 127 and fit in either type. But if you use it to store a number, the difference is meaningful.
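A quick way to see the difference is to put the same bit pattern into both types and print the results. This is just a minimal sketch, assuming the usual 8-bit char and two's complement hardware (the out-of-range conversion to signed char is technically implementation-defined, but every common compiler behaves this way):

Code:
#include <stdio.h>

int main(void)
{
    /* Same bit pattern, 0xFF, read through both types. */
    unsigned char u = 0xFF;
    signed char   s = (signed char)0xFF;

    printf("as unsigned char: %d\n", u);  /* 255 */
    printf("as signed char:   %d\n", s);  /* -1 on typical machines */

    /* ASCII only goes up to 127, so a plain character fits either way. */
    printf("'A' as signed:    %d\n", (signed char)'A');   /* 65 */
    printf("'A' as unsigned:  %d\n", (unsigned char)'A'); /* 65 */
    return 0;
}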
S. P. Gardebiter said:
No, 0xFFFF is actually equivalent to -32768.
Most significant bit is for making it negative (If it's a signed value).
No, you're wrong. You're getting confused with sign-magnitude representation, where the most significant bit is the sign and the remaining bits are the magnitude. That representation is rarely used, partly because it has two separate representations of zero: positive zero and negative zero. Because of that wasted pattern, a 16-bit sign-magnitude value only goes down to -32767, not -32768.
The representation that's normally used is two's complement. In that representation, all bits set is -1, and the number -32768 has only the most significant bit set. Basically the way it works is that the bit patterns count up to the max value, then start over from the min value and work back up toward zero. So a three-bit two's complement value looks like this:
Code:
Binary Decimal
000 0
001 1
010 2
011 3
100 -4
101 -3
110 -2
111 -1
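You can check the 16-bit values the same way. Here's a small sketch assuming int16_t from <stdint.h> (which is required to be two's complement); strictly, the out-of-range casts are implementation-defined before C23, but in practice every compiler wraps them:

Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* All bits set is -1; only the top bit set is the minimum. */
    int16_t all_bits = (int16_t)0xFFFF;   /* -1     */
    int16_t top_bit  = (int16_t)0x8000;   /* -32768 */

    printf("0xFFFF as int16_t: %d\n", all_bits);
    printf("0x8000 as int16_t: %d\n", top_bit);

    /* Counting up past the maximum wraps around to the minimum. */
    int16_t x = INT16_MAX;                /* 32767 */
    x = (int16_t)(x + 1);                 /* wraps to -32768 */
    printf("32767 + 1 wraps to: %d\n", x);
    return 0;
}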
Note that the table does show the most significant bit is always 1 for a negative value, but unsetting the most significant bit is not equivalent to multiplying by -1.
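To see that the top bit isn't just a sign flag you can flip, here's another small sketch (again assuming the common 8-bit two's complement char): clearing the top bit of -3 gives 125, not 3.

Code:
#include <stdio.h>

int main(void)
{
    /* -3 as an 8-bit two's complement value is 1111 1101. */
    signed char neg = -3;

    /* Clearing only the top bit leaves 0111 1101 = 125, not 3. */
    signed char cleared = (signed char)(neg & 0x7F);

    printf("-3 with top bit cleared: %d\n", cleared);  /* 125 */
    printf("actual negation, -(-3):  %d\n", -neg);     /* 3   */
    return 0;
}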