The ASN.1 DER format is deterministic; i.e., there is exactly one sequence of bytes that validly encodes a given pair of values $(r,s)$. Mind the details, though: the encodings of $r$ and $s$ are minimal-size signed big-endian integers. Since $r$ and $s$ are positive values, this means that the top bit of the first byte of each encoding must be zero. In your example, the first byte of $r$ is 0xB2, whose top bit is a 1, so an extra 0x00 byte is prepended, while the first byte of $s$ is 0x22, whose top bit is a 0, so no extra 0x00 byte is needed there.
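The rule can be sketched in a few lines of Python. This is a minimal illustration, not a production encoder; the function names are mine, and it assumes short-form DER lengths (total encoding under 128 bytes, which holds for curves up to P-384):

```python
def der_int(x: int) -> bytes:
    """Minimal signed big-endian DER INTEGER for a non-negative x."""
    b = x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")
    if b[0] & 0x80:
        # Top bit set would mean "negative", so prepend a 0x00 byte
        b = b"\x00" + b
    return bytes([0x02, len(b)]) + b  # tag 0x02 = INTEGER, short-form length

def der_sig(r: int, s: int) -> bytes:
    """SEQUENCE of the two INTEGERs (short-form length assumed)."""
    body = der_int(r) + der_int(s)
    return bytes([0x30, len(body)]) + body  # tag 0x30 = SEQUENCE
```

For instance, an $r$ whose first byte is 0xB2 gets the extra 0x00 byte, while an $s$ starting with 0x22 does not.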
However, there are some buggy implementations that produce wrong encodings in some cases (e.g. not including a leading 0x00 byte where necessary, or adding an unnecessary leading 0x00 byte). Thus, ECDSA signature verifiers are often a bit lenient in what they accept.
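A strict verifier, by contrast, would reject both kinds of deviation. A possible minimality check on the content octets of an INTEGER (the helper name is mine, purely for illustration):

```python
def is_strict_der_int(content: bytes) -> bool:
    """Check that INTEGER content octets encode a non-negative value minimally."""
    if len(content) == 0:
        return False              # empty INTEGER is invalid
    if content[0] & 0x80:
        return False              # top bit set: negative, invalid for r or s
    if len(content) > 1 and content[0] == 0x00 and not (content[1] & 0x80):
        return False              # unnecessary leading 0x00 byte
    return True
```

Lenient verifiers simply skip checks like the last one and accept the decoded value anyway.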
The ECDSA standards (ANSI X9.62, FIPS 186-4) don't define an ECDSA signature as a sequence of bytes, but as a pair of values $(r,s)$. Encoding of signatures is considered to be out of scope; the protocol that uses ECDSA signatures is responsible for defining which encoding will be used. Different protocols use different conventions. In practice, you will encounter two main encodings for ECDSA signatures:
- The ASN.1 DER format, described above. It is the one used in X.509 certificates and related protocols.

- The "raw" format, in which $r$ and $s$ are merely concatenated. In that format, $r$ and $s$ must first each be represented as a sequence of bytes with some convention (usually unsigned big-endian), possibly with some extra padding bytes (of value 0x00) so that both encodings have the same size. The "same size" requirement is important because the verifier must know where to split. OpenPGP uses the "raw" format.
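The "raw" format is even simpler to sketch. Again a minimal illustration with names of my choosing; `size` is the byte length of the curve order (e.g. 32 for P-256):

```python
def raw_sig(r: int, s: int, size: int) -> bytes:
    """Concatenate r and s as fixed-size unsigned big-endian values."""
    return r.to_bytes(size, "big") + s.to_bytes(size, "big")

def split_raw(sig: bytes) -> tuple[int, int]:
    """Recover (r, s): the verifier splits at the midpoint."""
    half = len(sig) // 2
    return (int.from_bytes(sig[:half], "big"),
            int.from_bytes(sig[half:], "big"))
```

Because every value is padded to the same fixed size, the verifier can always split at the midpoint, whereas DER carries explicit lengths instead.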