
I tried Google Bard recently for the first time, after it became available here in Finland too. To my surprise, it gives a different answer each time I ask the same question. Is this a feature of LLMs in general, or of Bard specifically, or what?

harism

1 Answer


LLMs generate a probability distribution over their vocabulary of tokens: for a given input text, they output a probability for each token in the vocabulary. A higher probability means the LLM predicts that token to be a more likely next token than the lower-probability ones. For example, given the input text She has a pet, the distribution might look like {'dog': 0.4, 'cat': 0.3, 'snake': 0.2, 'crocodile': 0.1}, just as an illustration.

If you have tried OpenAI's playground, it offers interactive sliders for the sampling parameters, one of which is temperature. The lower the temperature, the greedier the model: if you set the temperature to 0, the LLM will always choose the token with the highest probability, no matter what. So, at temperature 0, every time you ask the same question, the LLM-generated response will be the same.
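To make this concrete, here is a small sketch of temperature-based sampling over a made-up set of next-token scores (the logits and token names below are invented for illustration, not taken from a real model):

```python
import math
import random

# Hypothetical next-token logits for the prompt "She has a pet"
# (illustrative numbers only, not from a real model).
logits = {"dog": 2.0, "cat": 1.7, "snake": 1.3, "crocodile": 0.6}

def sample_next_token(logits, temperature):
    """Pick the next token; temperature == 0 means greedy decoding."""
    if temperature == 0:
        # Greedy: always the highest-scoring token -> deterministic output.
        return max(logits, key=logits.get)
    # Softmax with temperature: lower T sharpens the distribution,
    # higher T flattens it.
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(exps.values())
    r = random.random() * total
    cum = 0.0
    for token, e in exps.items():
        cum += e
        if r < cum:
            return token
    return token  # fallback for floating-point edge cases

sample_next_token(logits, 0)  # always "dog"
```

With temperature 0 the function is deterministic; with any positive temperature, repeated calls can return different tokens, which is exactly why repeated identical prompts yield different answers.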

Alternatively, increasing the temperature introduces randomness. Instead of always selecting the token with the highest probability, the model chooses one of the top n highest-probability tokens; selection becomes stochastic. So, for the same question, the next token may be selected differently each time, and therefore the final response will differ.
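That "top n" restriction is often called top-k sampling. A minimal sketch, again using invented logits:

```python
import math
import random

def top_k_sample(logits, k, temperature=1.0):
    """Sample the next token from only the k highest-scoring candidates."""
    # Keep the k top-scoring tokens, discard the rest.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Softmax (with temperature) over the surviving candidates only.
    m = max(l for _, l in top)
    exps = [(t, math.exp((l - m) / temperature)) for t, l in top]
    total = sum(e for _, e in exps)
    r = random.random() * total
    cum = 0.0
    for token, e in exps:
        cum += e
        if r < cum:
            return token
    return top[-1][0]  # fallback for floating-point edge cases

logits = {"dog": 2.0, "cat": 1.7, "snake": 1.3, "crocodile": 0.6}
# With k=2, only "dog" or "cat" can ever be chosen, but which one
# varies from call to call.
```

The randomness is confined to the top candidates, so the output stays plausible while still varying between runs.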

Bard and ChatGPT use stochastic selection to make their output more creative and to appeal to a broad audience. It makes the LLM feel sentient.

Chinmay
  • Thank you for the answer. I'm tempted to ask how prompts like "write me a song about rain" are handled, but I'd rather ask: is it the LLM that answers such questions at all? – harism Jul 22 '23 at 05:33
  • 1
    The question seems unclear. But, the prompt write me a song about rain is answered exactly the same way. The LLM outputs a list of probabilities, and the selection criteria select the next token, and this repeats until a stop token is reached. So yes, LLMs answer these questions. – Chinmay Jul 22 '23 at 05:37
  • Your example question gives interesting answers, by the way, and I need to repeat it a few times once I have time. Cheers! – harism Jul 22 '23 at 06:12
  • I set the temperature to 1.7 expecting randomness in the responses, but I get the same answers with no variation. Randomness is just not something ChatGPT can do. – AlxVallejo Oct 04 '23 at 13:57
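The repeat-until-stop-token loop Chinmay describes in the comments can be sketched with a toy "model" (a lookup table of hypothetical next-token probabilities; all token names below are invented for illustration):

```python
import random

# Toy autoregressive "model": maps the current token to candidate
# next tokens with probabilities. A real LLM conditions on the whole
# sequence; this table is only a stand-in.
NEXT = {
    "<start>": [("write", 1.0)],
    "write":   [("rain", 0.6), ("sun", 0.4)],
    "rain":    [("<stop>", 1.0)],
    "sun":     [("<stop>", 1.0)],
}

def generate(prompt="<start>"):
    """Sample one token at a time until a stop token is produced."""
    tokens = [prompt]
    while tokens[-1] != "<stop>":
        candidates = NEXT[tokens[-1]]
        r = random.random()
        cum = 0.0
        for token, p in candidates:
            cum += p
            if r < cum:
                break
        tokens.append(token)
    return tokens[1:-1]  # drop the start and stop markers
```

Each pass through the loop is one stochastic token selection, so two runs of `generate()` can diverge at any step and then produce entirely different continuations.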