I decided to elaborate on my comment just to address the specific example of $f:\Bbb{R}\to\Bbb{R}$, given by $f(p)=p^2$. Let us use $x:\Bbb{R}\to\Bbb{R}$ as the identity function. Then, one can write
\begin{align}
df&=2x\,dx
\end{align}
This is super condensed notation. So, the first thing we must do is plug in a point $p\in\Bbb{R}$, the point at which we're calculating the differential. So, we have
\begin{align}
df_p&=2x(p)\,dx_p= 2p\,dx_p
\end{align}
Now, take stock of what everything is: $df_p:\Bbb{R}\to\Bbb{R}$ is a linear transformation. On the right, $2p\in\Bbb{R}$ is a number and $dx_p:\Bbb{R}\to\Bbb{R}$ is a linear transformation, so the right-hand side is also a linear transformation. Linear transformations must eat vectors in their domain and spit out vectors in their target space. In our situation, everything is close to trivial since $\Bbb{R}$ is a $1$-dimensional space. So, for any $h\in\Bbb{R}$, we have
\begin{align}
df_p(h)&=\bigg(2p\,dx_p\bigg)(h)=2p\cdot dx_p(h)=2p\cdot d(\text{id}_{\Bbb{R}})_p(h)=2p\cdot\text{id}_{\Bbb{R}}(h)=2p\cdot h
\end{align}
i.e. $df_p(h)=2ph$. This is the full effect of the differential once we plug everything in.
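If you'd like to see this formula in action, here's a quick numerical sanity check (my own illustration, with an arbitrary choice of $p=3$; none of these names appear in the discussion above): the linear map $h\mapsto 2ph$ should approximate the actual change $f(p+h)-f(p)$, with an error that vanishes faster than $h$.

```python
# Sanity check for f(p) = p^2: the differential df_p(h) = 2p*h is the
# best linear approximation to the actual change f(p + h) - f(p).

def f(p):
    return p ** 2

def df(p, h):
    # df_p applied to the vector h, i.e. df_p(h) = 2p * h
    return 2 * p * h

p = 3.0
for h in (1e-1, 1e-2, 1e-3):
    actual_change = f(p + h) - f(p)
    linear_part = df(p, h)
    # here the remainder is exactly h^2, so remainder / h -> 0 as h -> 0
    print(h, actual_change, linear_part, (actual_change - linear_part) / h)
```

For this particular $f$, the remainder $f(p+h)-f(p)-df_p(h)$ is exactly $h^2$, which is the cleanest possible illustration of "error smaller than linear".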
Also, you asked
> But it can't be saying $dx_p=[1]$, can it?
No, that's not what it says. As I've emphasized many times now, $dx_p=d(\text{id}_{\Bbb{R}})_p=\text{id}_{\Bbb{R}}$ is a linear transformation $\Bbb{R}\to\Bbb{R}$. But from linear algebra I hope you're comfortable with the idea that once we specify a basis for the domain and target space, we can associate to any linear transformation a certain matrix. In this case, we choose the basis $\beta_1=\{1\}$ for the domain $\Bbb{R}$, and also the basis $\beta_2=\{1\}$ for the target space $\Bbb{R}$. Then, the matrix representation (which I denote by square brackets around the linear transformation) is
\begin{align}
[dx_p]_{\beta_1}^{\beta_2}&=\begin{pmatrix} 1\end{pmatrix}
\end{align}
This is simply saying that the matrix representation of the identity linear transformation (with respect to the same basis on the domain and target) is the $1\times 1$ identity matrix. Hopefully this is obvious from linear algebra.
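For completeness, the single entry of this matrix comes from the standard recipe: apply the linear transformation to the basis vector of the domain and expand the result in the basis of the target space:
\begin{align}
dx_p(1)=\text{id}_{\Bbb{R}}(1)=1=1\cdot 1,
\end{align}
so the lone column of $[dx_p]_{\beta_1}^{\beta_2}$ is just $(1)$.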
Finally, I want to emphasize that saying for all $p$, $dx_p=\text{id}_{\Bbb{R}}$ or that for all $p$, $[dx_p]_{\beta_1}^{\beta_2}=\begin{pmatrix} 1\end{pmatrix}$ is entirely equivalent to the statement that for all $p$, $x'(p)=1$.
Notice that in general, simply by unwinding definitions and without any handwaving, one finds that $df=f'\,dx$. This means $df_p(h)=f'(p)\cdot dx_p(h)=f'(p)\cdot h$.
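As a sketch (my own illustration, not part of the argument above), this general relation can be checked numerically for any $f$ whose derivative you know; here I've arbitrarily picked $f=\sin$ with $f'=\cos$ and a point $p=0.7$:

```python
import math

# General check of df_p(h) = f'(p) * h: the linear map should track
# the actual change f(p + h) - f(p) up to an error smaller than h.

def df(fprime, p, h):
    # df_p is the linear map h -> f'(p) * h
    return fprime(p) * h

p = 0.7
for h in (1e-2, 1e-3, 1e-4):
    actual_change = math.sin(p + h) - math.sin(p)
    linear_part = df(math.cos, p, h)
    # the ratio remainder / h shrinks with h, as it must for a differential
    print(h, actual_change, linear_part, abs(actual_change - linear_part) / h)
```

The design point is the same as in the $f(p)=p^2$ case: $df_p$ is not a number but a linear map, and multiplying by $f'(p)$ is what that map does to each input vector $h$.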
So, as you can see, in one dimension, the object $df$ doesn't tell us a whole lot; we're just multiplying by $f'$ at the appropriate point. The reason this was so trivial in one dimension is because linear algebra in one dimension is trivial, not because the idea of the differential is trivial.