But is that correct? Is it possible for a sentence to have more than one valid parse tree (e.g. a constituency-based one)?
The fact that a single sequence of words can be parsed in different ways depending on context (or "grounding") is a common basis of miscommunication, misunderstanding, innuendo and jokes.
One classic NLP-related "joke" (which has been around longer than modern AI and NLP) is:
Time flies like an arrow.
Fruit flies like a banana.
There are actually several valid parse trees for even these simple sentences. Which ones come "naturally" will depend on context. Anecdotally, I only half got the joke when I was younger because I did not know there were such things as fruit flies, so I was partly confused by the literal (but still validly parsed, and somewhat funny) reading that all fruit can fly about as well as a banana does.
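You can see the multiple parses directly with NLTK's chart parser. The toy grammar below is my own construction purely for illustration (it is not a standard NLTK resource), but it is enough to license two readings of each sentence - one where "flies" is the verb and one where "fruit flies"/"time flies" is a noun compound and "like" is the verb:

```python
import nltk

# A deliberately small, hand-written CFG (an assumption for illustration,
# not a real-world grammar) that admits both readings of each sentence.
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> N | N N | Det N
VP -> V NP | V PP
PP -> P NP
Det -> 'a' | 'an'
N  -> 'fruit' | 'flies' | 'banana' | 'time' | 'arrow'
V  -> 'flies' | 'like'
P  -> 'like'
""")

parser = nltk.ChartParser(grammar)

for sent in ["time flies like an arrow", "fruit flies like a banana"]:
    print(sent)
    # The chart parser enumerates every parse the grammar allows,
    # so each sentence prints two distinct trees here.
    for tree in parser.parse(sent.split()):
        tree.pretty_print()
```

Both sentences come out with two trees each under this grammar, which is exactly the ambiguity the joke relies on; a richer grammar (allowing, say, the imperative "time [the] flies..." reading) would produce even more.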
Analysing these kinds of ambiguous sentences leads to the grounding problem - the fact that without some referent for its symbols, a grammar is devoid of meaning, even if you know the rules and can construct valid sequences. For instance, the above joke works partly because the nature of time, when referred to in a particular way (as a singular noun, not as a possession or property of another object), leads to a well-known metaphorical reading of the first sentence.
A statistical ML parser could get both sentences correct through training on many relevant examples (or trivially by including the examples themselves with correct parse trees). This has not solved the grounding problem, but may be of practical use for any machine required to handle natural language input and map it to some task.
I did check a while ago though, and most part-of-speech (POS) taggers in Python's NLTK get both sentences wrong - I suspect because resolving sentences like those above, and AI "getting language jokes" in general, is not a high priority compared to more practical uses such as chatbots, summarisers, etc.
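If you want to check for yourself, the default NLTK tagger is a couple of lines; the exact tags you get will depend on the tagger model and NLTK version, so I am not claiming a specific output here:

```python
import nltk

# First run only: fetch the tokenizer and tagger models.
# nltk.download('punkt')
# nltk.download('averaged_perceptron_tagger')

for sent in ["Time flies like an arrow.", "Fruit flies like a banana."]:
    # pos_tag uses NLTK's default (perceptron-based) English tagger.
    print(nltk.pos_tag(nltk.word_tokenize(sent)))
```

A tagger that handled the joke would tag "flies" as a verb in the first sentence and a noun in the second (and "like" the other way around); comparing the two outputs shows whether your installed tagger distinguishes them.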