If x = 0.999... (a) -> 10x = 9.999... -> 10x - x = (9.999...) - (0.999...) -> 9x = 9 -> x = 9/9 = 1 (b)
So x = 0.999... (a) = 1 (b)?
So... what is the right explanation for why this happens? I feel like there must be a tiny error somewhere... My guess is that the subtraction of the repeating decimals is invalid, because of how infinity works. Is that it? Am I right?
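One way to convince yourself there is no hidden error in the subtraction step: treat 0.999... as the limit of the finite decimals 0.9, 0.99, 0.999, .... Every finite truncation with n nines equals exactly 1 - 10^-n, so the gap to 1 shrinks toward zero and the limit is exactly 1. A quick illustrative check in Python using exact fractions (the function name `nines` is just my own label):

```python
from fractions import Fraction

def nines(n):
    # Exact value of the decimal 0.99...9 with n nines,
    # i.e. the n-th partial sum of 0.9 + 0.09 + 0.009 + ...
    return Fraction(10**n - 1, 10**n)

# The gap between 1 and the truncation is exactly 10^-n:
for n in (1, 5, 10):
    print(n, 1 - nines(n))
```

Since the gap 10^-n can be made smaller than any positive number, the limit 0.999... differs from 1 by exactly 0, which is why the 10x - x manipulation gives a true result rather than an approximation.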