When checking the validity of a density matrix in Qiskit, it asserts that the norm of the density matrix is 1. Why is that the case? See here
- related: https://quantumcomputing.stackexchange.com/q/5589/55 – glS Mar 08 '22 at 21:38
2 Answers
In the code you've linked, it is the state vector that is being checked, not the density matrix. The condition that the (2-)norm of a state vector be one simply means that the outcome probabilities in any orthonormal basis must sum to one. For an operator to be a density matrix, it needs to be positive semidefinite with trace 1.
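As a quick illustration using Qiskit's `quantum_info` classes (a minimal sketch; the particular states are just examples, and exact tolerances are implementation details), a normalized vector passes `Statevector.is_valid()`, while a Hermitian, trace-1 matrix with a negative eigenvalue fails `DensityMatrix.is_valid()`:

```python
import numpy as np
from qiskit.quantum_info import Statevector, DensityMatrix

# A state vector is valid when its 2-norm is 1 (outcome probabilities sum to 1).
print(Statevector(np.array([1, 1j]) / np.sqrt(2)).is_valid())  # True
print(Statevector(np.array([1, 1j])).is_valid())               # False: norm is sqrt(2)

# A density matrix is valid when it is positive semidefinite with trace 1.
print(DensityMatrix(np.array([[0.5, 0], [0, 0.5]])).is_valid())   # True
# Trace 1 alone is not enough: this Hermitian matrix has a negative eigenvalue.
print(DensityMatrix(np.array([[1.5, 0], [0, -0.5]])).is_valid())  # False
```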

- Thank you for your response, but it also checks the density matrix in `def _format_state`: https://qiskit.org/documentation/_modules/qiskit/quantum_info/states/utils.html – Farhad Mar 09 '22 at 00:36
- Right. So what is happening there is that it first checks the dimensions of the ndarray. If one of the dimensions is 1, it converts it to a Statevector, and if both dimensions are the same, it converts it to a DensityMatrix using `state = DensityMatrix(state)`. After this, `state.is_valid()` is called, which first checks what type of state it is. For density matrices, it checks that the trace is 1. Look at the `is_valid` definition in https://qiskit.org/documentation/_modules/qiskit/quantum_info/states/densitymatrix.html#DensityMatrix – Gaurav Saxena Mar 10 '22 at 01:12
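Beyond the trace, a density matrix also has to be Hermitian and positive semidefinite. A minimal NumPy sketch of that kind of check (an illustration of the idea, not Qiskit's actual implementation, which handles tolerances and defaults differently) might look like:

```python
import numpy as np

def looks_like_density_matrix(mat, atol=1e-8):
    """Rough check mirroring what DensityMatrix.is_valid verifies:
    trace 1, Hermitian, and positive semidefinite (up to a tolerance)."""
    mat = np.asarray(mat, dtype=complex)
    if not np.isclose(np.trace(mat), 1, atol=atol):
        return False
    if not np.allclose(mat, mat.conj().T, atol=atol):
        return False
    # For a Hermitian matrix the eigenvalues are real; all must be non-negative.
    return bool(np.all(np.linalg.eigvalsh(mat) >= -atol))
```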
There are two interesting comments to make on this question (as well as on the linked question). One concerns the definition of states of a system in quantum theory, and the other concerns the normalization condition. They may be somewhat tangential to the specifics of the question, but they are still missing from the answers and comments given so far.
Definition of a state: It is interesting that in a general setting states need not be defined as density matrices but as linear functionals. This is more in the spirit of taking the Heisenberg picture to its limits, and as such it was historically pursued by researchers in the algebraic approach. Observables in standard quantum theory are defined to be linear (possibly unbounded) self-adjoint operators acting on a Hilbert space $\mathcal{H}$; for simplicity, let's suppose they are bounded and write $\mathcal{B}(\mathcal{H})^{\mathbb{R}}$, where $\mathbb{R}$ denotes the self-adjoint part of $\mathcal{B}(\mathcal{H})$. Quantum states may then be defined as linear functionals $\rho: \mathcal{B}(\mathcal{H}) \to \mathbb{C}$ that are both positive and normalized. Physically, we ask for positivity and normalization precisely because, when evaluated on self-adjoint operators (observables), we want the functional to return the probabilities of witnessing the values of those observables. This definition makes many interesting generalizations apparent: for instance, one can show that states separate elements of $\mathcal{B}(\mathcal{H})$, and one can see that there is a clear generalization to the case where the observables form a $C^*$-subalgebra instead of the entire set of bounded self-adjoint operators. An interesting fact is the following theorem:
Theorem: Let $\dim(\mathcal{H}) < \infty$. Then there is a bijection between states $\rho : \mathcal{B}(\mathcal{H}) \to \mathbb{C}$ and density matrices, i.e. positive semidefinite operators $D \in \mathcal{B}(\mathcal{H})$ with $\text{Tr}(D) = 1$. Namely, for every state $\rho$ there exists a unique density matrix $D$ such that $\rho(\,\cdot\,) = \text{Tr}(D \,\,\cdot\,\,)$.
So one can think of the definition of states as density matrices as a consequence of working with bounded operators on finite-dimensional Hilbert spaces.
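For a concrete single-qubit illustration of the correspondence $\rho(\,\cdot\,) = \text{Tr}(D\,\,\cdot\,\,)$, the functional evaluated on an observable returns its expectation value. A small numerical check (the particular state and observables are just examples chosen for this sketch):

```python
import numpy as np

# Density matrix of the qubit state |+> = (|0> + |1>)/sqrt(2).
D = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Pauli observables.
sigma_x = np.array([[0, 1], [1, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# The state viewed as a linear functional: rho_D(A) = Tr(D A).
rho_D = lambda A: np.trace(D @ A).real

print(rho_D(sigma_x))    # 1.0 -> |+> is a +1 eigenstate of X
print(rho_D(sigma_z))    # 0.0 -> equal probabilities for the Z outcomes
print(rho_D(np.eye(2)))  # 1.0 -> normalization: rho_D(I) = Tr(D) = 1
```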
Normalization: Another very interesting point is that even though normalization is desired in principle, in experimental implementations it is often not perfectly observed. Hence there is a well-defined notion of unnormalized states $\rho \mapsto \vec{\omega} \in \mathbb{R}^N$ in general probabilistic theories (GPTs), with $N$ the GPT dimension; quantum theory can itself be viewed as a GPT. From an operational perspective, the GPT has a space of normalized states, but it is often useful to represent subnormalized (or even unnormalized) states as well. As an example, suppose there is significant photon loss in the detection stage of a quantum optics experiment, and one maps quantum states to probability distributions of detections given that some photons were prepared. The photon loss necessarily leads to a subnormalized state in the GPT sense: the 'no photon' event can be described by the GPT state $\vec{0}$, so the effective preparation is a convex combination of $\vec{0}$ and the normalized state $\vec{\omega}$ that should ideally be implemented, namely $\vec{\omega}_{\text{effective}} = \alpha \vec{0} + (1-\alpha)\vec{\omega}$ with $\alpha$ the photon loss rate.
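As a small numerical illustration (the loss rate $\alpha$ and the state below are hypothetical), such a subnormalized state has trace $1-\alpha$ and is therefore rejected by a strict trace-1 check like Qiskit's:

```python
import numpy as np
from qiskit.quantum_info import DensityMatrix

alpha = 0.3                                     # hypothetical photon-loss rate
rho_ideal = np.array([[0.5, 0.5], [0.5, 0.5]])  # ideal |+><+| state, trace 1

# Losing a photon with probability alpha leaves the effective (subnormalized)
# state (1 - alpha) * rho_ideal, mixing in the "no photon" event.
rho_effective = (1 - alpha) * rho_ideal
print(np.trace(rho_effective).real)             # 0.7, not 1
print(DensityMatrix(rho_effective).is_valid())  # False: trace is not 1
```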
