I assume that $|x|$ in the exercise denotes the length of the bitstring $x$.
If so, the answer is simple: a polynomial-time algorithm must complete in a number of steps bounded by a polynomial function of the length of its input.
How long is $|x|$, represented as a bitstring? Well, it takes $k$ bits to represent a number between $2^{k-1}$ and $2^k-1$, so the input $n = |x|$ is only about $\log_2 n$ bits long.
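For a quick sanity check of that arithmetic, here's a minimal Python sketch (just an illustration, not part of the exercise; `int.bit_length()` is a standard built-in):

```python
# The binary representation of n grows only logarithmically in n:
# e.g. 1000000 fits in 20 bits, since 2**19 <= 1000000 < 2**20.
for n in [5, 100, 10**6]:
    print(n, n.bit_length())
# Prints: 5 3, 100 7, 1000000 20
```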
Thus, in order for $A$ to be a polynomial-time algorithm, $A(|x|)$ must complete in at most $p(\log_2 |x|)$ steps, where $p$ is some polynomial function. But writing out a string containing $|x|$ bits obviously takes at least $|x| = 2^{\log_2 |x|}$ steps, which is not bounded above by any polynomial function of $\log_2 |x|$.
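To make the mismatch concrete, here's a toy sketch (hypothetical code, not from the exercise) of an algorithm that receives $n = |x|$ in binary but must produce an $n$-bit output:

```python
def write_n_bits(encoded_n: bytes) -> str:
    """Toy algorithm: input is n in binary, output is a string of n bits."""
    n = int.from_bytes(encoded_n, "big")  # the input is only ~log2(n) bits long
    return "0" * n                        # but the output must be n bits long

# A 4-byte (32-bit) input forces an output of ~16 million bits:
out = write_n_bits((2**24).to_bytes(4, "big"))
print(len(out))  # 16777216 -- vastly longer than any polynomial in 32
```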
Essentially, the problem is that the input $|x|$ given to $A$ is "too short" compared to the expected output $x$. The way we "fix" this issue is simply by giving $A$ an extra input $1^n$ (a string of $n$ ones, where $n = |x|$) whose length matches the length of the expected output. That way, by "padding" the input of $A$ to be at least as long as what its output should be, we effectively allow the running time of $A$ to be a polynomial function of the length of both its input and its (expected) output, while still adhering to the standard definition of a polynomial-time algorithm.
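In code, the convention looks something like this (again a toy sketch; the algorithm `A` here is hypothetical):

```python
def A(unary_n: str) -> str:
    """Toy algorithm: input is 1^n (n ones), output is a string of n bits."""
    assert set(unary_n) <= {"1"}  # the input really is of the form 1^n
    n = len(unary_n)              # n is recovered as the input's length
    return "0" * n                # output length n == input length n

print(len(A("1" * 1000)))  # 1000: running time is polynomial in input length
```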
In programming, we'd call this a hack, or perhaps a kluge. But it's a convenient kluge, as it saves us from having to redefine "polynomial time" in crypto differently from the way it's usually defined in complexity theory, and so facilitates communication between these two related fields.
Alas, the cost of this notational uniformity is that the extra $1^n$ argument sometimes appears confusing to new students in crypto — especially as some authors, perhaps out of some misplaced sense of professional embarrassment, seem rather reluctant to explain that it is, indeed, essentially just a notational hack to make the standard definition of "polynomial time" fit what we need in crypto.