First of all, there exist two main types of random variables: the discrete type and the absolutely continuous type. For example, if you ask someone to give you an integer between $1$ and $10$, there are only $10$ possible values for $X$. This is a discrete random variable.
Now, ask this person to choose a real number between $1$ and $10$: there are uncountably many possible values for $X$ in this interval. This is an "absolutely continuous random variable".
Here, you consider $X$ as an absolutely continuous random variable because you consider that every positive real value (acceptable for the weight of a dog) is possible. In what follows, I will not be fully rigorous, but it should give you an idea of why we say that $\mathbb{P}[X=x]=0$. I will give you an intuitive explanation and a more "rigorous" one.
Intuitively:
Consider that the acceptable values for the weight of a dog are in $[5,15]$ and that your dog is as likely to weigh a particular value $a$ as any other value in this interval. The probability that your dog weighs one very particular value is $1$ over the number of reals in $[5,15]$, which is clearly infinite! And "$1$ over $\infty$ is $0$". It doesn't mean your dog can never weigh exactly $a$, but if you pick a random number in $[5,15]$, it will "almost always" not be the actual weight of your dog.
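If you want to see this numerically, here is a minimal sketch (assuming Python with NumPy and the uniform model on $[5,15]$ described above, which is my own illustrative choice): draw many random weights and count how often one of them equals a fixed value $a$ exactly.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

a = 7.3                                      # a fixed candidate weight (arbitrary choice)
draws = rng.uniform(5, 15, size=1_000_000)   # one million random weights in [5, 15]

exact_hits = np.count_nonzero(draws == a)    # how many draws equal a exactly
print(f"draws equal to {a}: {exact_hits} out of {draws.size}")
# This prints 0 in practice: a randomly chosen real in [5, 15]
# "almost never" coincides exactly with any fixed value.
```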
I suppose you know what an integral is. In the absolutely continuous case, a random variable $X$ may admit a density $f_{X}$ and, as you probably know, we have
$$\mathbb{P}[a\le X\le b]=\int_{a}^{b}f_{X}(x)\text{d}x$$
This denotes the probability that $X$ takes a value between $a$ and $b$. But if you take $a=b$, which corresponds to $\mathbb{P}[X=a]$, you have
$$\mathbb{P}[a\le X \le a]=\int_{a}^{a}f_{X}(x)\text{d}x=0=\mathbb{P}[X=a]$$
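As a concrete sketch of these two integrals (assuming Python with SciPy and, again as an illustrative choice, the uniform density $f_X(x)=1/10$ on $[5,15]$):

```python
from scipy.integrate import quad

def f_X(x):
    """Illustrative density: uniform on [5, 15], i.e. 1/10 inside and 0 outside."""
    return 0.1 if 5 <= x <= 15 else 0.0

p_interval, _ = quad(f_X, 7, 8)   # P[7 <= X <= 8] = integral of f_X from 7 to 8
p_point, _ = quad(f_X, 7, 7)      # P[X = 7]       = integral of f_X from 7 to 7

print(p_interval)   # ~0.1
print(p_point)      # 0.0: integrating over a single point gives zero
```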
"Formally":
This part is not really rigorous, but it gives you an insight into the idea behind the formalism of probability theory. Be aware that this presentation is not complete and that you should not rely on it for rigorous work.
The axiomatization of probability theory is based on measure theory. In measure theory, we define certain nonnegative functions, called measures, on particular collections of subsets of a set $\Omega$. They have to fulfill some properties, but I won't discuss them here.
A very particular measure we need here is the so-called Lebesgue measure $\mu$. To keep things simple, we restrict to $\Omega\subset\mathbb{R}$: the measure of an interval $(a,b)\subset\Omega$ is $\mu(a,b)=b-a$ (where $b\geq a$). You can intuitively see that the measure of $[a,a]=\{a\}$ is $0$.
A property often required of a measure is a certain additivity, which means that $\mu(A\cup B)=\mu(A)+\mu(B)$ when $A\cap B=\emptyset$. This directly implies $$\mu(a,b)=\mu[a,b)=\mu(a,b]=\mu[a,b]=b-a$$ because, for instance, $(a,b)\cup\{b\}=(a,b]$ and $(a,b)\cap\{b\}=\emptyset$.
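Spelled out, the additivity step for the half-open interval reads
$$\mu(a,b]=\mu\big((a,b)\cup\{b\}\big)=\mu(a,b)+\mu(\{b\})=(b-a)+0=b-a.$$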
Actually, we generally want $\sigma$-additivity, or "countable additivity", which means that $$\mu\Big(\bigcup_{i=1}^{\infty}A_{i}\Big)=\sum_{i=1}^{\infty}\mu(A_{i})$$ whenever the $A_{i}$'s are pairwise disjoint. You can't take an uncountable number of $A_{i}$'s!
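To see why uncountable additivity cannot be allowed, note that any interval is the uncountable union of its singletons, each of Lebesgue measure $0$; if we could sum over uncountably many pieces, we would get the contradiction
$$10=\mu[5,15]=\mu\Big(\bigcup_{x\in[5,15]}\{x\}\Big)\stackrel{?}{=}\sum_{x\in[5,15]}\mu(\{x\})=0.$$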
In the absolutely continuous case, like yours, the probability measure is built from this Lebesgue measure in a way I won't detail here. Whereas the Lebesgue measure is not bounded (think about $\mu[0,\infty)$), a probability measure is bounded and, more precisely, satisfies $\mathbb{P}(\Omega)=1$, where $\Omega$ is the set of all possible values of your random variable.
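For instance, in the dog-weight example with $\Omega=[5,15]$ and the uniform model (again my illustrative assumption), the probability measure is just the Lebesgue measure rescaled by $\mu(\Omega)=10$:
$$\mathbb{P}(A)=\frac{\mu(A)}{10},\qquad\text{so that }\ \mathbb{P}(\Omega)=\frac{10}{10}=1\ \text{ and }\ \mathbb{P}[a\le X\le b]=\frac{b-a}{10}\ \text{ for }[a,b]\subset[5,15].$$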
Moreover, the probability measure (in the absolutely continuous case) $\mathbb{P}$ is said to be dominated by $\mu$, which means that whenever $\mu(A)=0$ for some set $A$, we also have $\mathbb{P}(A)=0$. Here, your event is $X=x$, which corresponds to the set $\{x\}$, and we have seen above that $\mu(\{x\})=0$, so that $\mathbb{P}[X=x]=0$.