
Sometimes, when I see answers like this from large language models, I feel disgusted:

Me: Does Voyager 1 have enough velocity to escape the solar system without using Jupiter's gravity assist?

Claude: "

  • Escape velocity at 1 AU from the Sun is 151,560 km/h.
  • Voyager 1's pre-Jupiter flyby velocity was 169,200 km/h.

Therefore, Voyager 1 absolutely did NOT have enough velocity at 1 AU to escape the solar system without the Jupiter gravity assist. Its velocity of 169,200 km/h at 1 AU was below the solar escape velocity of 151,560 km/h."


A stupid answer from Claude 2
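
The mistake is pure arithmetic: taking the model's own figures at face value, 169,200 km/h is greater than 151,560 km/h, so the "below the solar escape velocity" conclusion contradicts the numbers it just listed. A minimal check in Python (purely illustrative; whether the quoted figures are physically accurate is a separate question):

    # Figures quoted from Claude's answer, both in km/h
    escape_velocity_at_1_au = 151_560   # claimed solar escape velocity at 1 AU
    voyager_1_pre_flyby = 169_200       # claimed Voyager 1 velocity before the Jupiter flyby

    # The model's own numbers contradict its "below" conclusion
    print(voyager_1_pre_flyby > escape_velocity_at_1_au)   # prints: True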

So my question is: why can this large language model sometimes solve complex problems, yet fail to answer basic ones? (The irony is that it stopped being free before it was ready.)

Mr Saw
  • Your post just reads like commentary, I don't see an actual question. What is your question? – Dennis Soemers Oct 06 '23 at 13:24
  • @DennisSoemers I mean, why can this large language model sometimes solve complex problems but fail to answer basic ones? (The irony is that it stopped being free before it was ready.) – Mr Saw Oct 06 '23 at 13:31
  • Please edit your title and body of the post, such that the title actually reads like a question and the body of the post also makes it clear and explicit what the question that you're wondering about is. – Dennis Soemers Oct 06 '23 at 13:37
  • @DennisSoemers ok, it's done. But if you have any answer to my question, please help; I'm confused! – Mr Saw Oct 06 '23 at 13:44
  • My short answer would be that LLMs can never be relied upon to generate factually correct text. That's not what they are explicitly trained for. They especially can't be relied upon for things like math, because they don't have any mechanisms for symbolic manipulation. They view everything as strings of tokens (see the tokenization sketch after these comments), and have been trained to produce sequences of tokens that are statistically likely, given the behaviour of humanity all over the internet. They have no understanding of the text's meaning. But LLMs are not my main area of expertise; someone else should probably write a proper, full answer. – Dennis Soemers Oct 06 '23 at 13:55
  • The answer is very simple: the training set contains some of the "complex problems" but very few of the simple ones, and the LLM is just reproducing its training set (which is not public). – Dr. Snoopy Oct 06 '23 at 16:32
  • Hi! Welcome to AI SE. This question has been asked in similar versions about other LLMs, and the reason is all the same. Please see those other questions to get some more responses. – Robin van Hoorn Oct 07 '23 at 12:37
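
To make the token-level view from the comments above concrete: a figure such as 169,200 is typically split into several sub-word tokens rather than handled as a single number, which is one plausible reason numeric comparisons go wrong. A minimal sketch using the tiktoken library (an OpenAI tokenizer; Claude's own tokenizer is not public, so this only illustrates the general idea):

    # pip install tiktoken
    import tiktoken

    # cl100k_base is an OpenAI tokenizer; Claude's tokenizer is different,
    # so this only demonstrates how numbers break into sub-word pieces.
    enc = tiktoken.get_encoding("cl100k_base")

    for text in ["169,200 km/h", "151,560 km/h"]:
        token_ids = enc.encode(text)
        pieces = [enc.decode([t]) for t in token_ids]
        print(f"{text!r} -> {pieces} ({len(token_ids)} tokens)")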

0 Answers