
Temperature (default: 0.2)

The Temperature setting is common to all ChatGPT functions. It fine-tunes the sampling temperature with a number between 0 and 1: use values near 1 for creative applications and values near 0 for well-defined answers.

Example: if you would like factual or straightforward answers, such as a country's capital, use 0. For less straightforward tasks, such as generating text or content, a higher temperature is required to capture idiomatic expressions and textual nuance.
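As a rough sketch of the factual example above, the call below requests an answer with temperature set to 0 using the openai Python package (the model name and prompt are placeholders for illustration, not part of this page):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Factual lookup: temperature=0 keeps the answer deterministic and literal.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name for illustration
        messages=[{"role": "user", "content": "What is the capital of France?"}],
        temperature=0,
    )
    print(response.choices[0].message.content)

For creative work, moving temperature toward 1 widens the sampling distribution and yields more varied phrasing.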

Max Length (default: 120)

Represents the maximum number of tokens the model may use when generating prompt results. Tokens can be thought of as pieces of words that the model uses to process text.

Example:
  • 1 token ≈ 4 characters
  • 1 token ≈ 3/4 of a word
  • 100 tokens ≈ 75 words

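For a quick back-of-the-envelope check of these ratios, here is a small sketch; estimate_tokens is a hypothetical helper built only from the rules of thumb above, not an exact tokenizer:

    def estimate_tokens(text: str) -> int:
        """Rough token estimate from the rules of thumb above (not an exact tokenizer)."""
        by_chars = len(text) / 4             # 1 token ≈ 4 characters
        by_words = len(text.split()) / 0.75  # 1 token ≈ 3/4 of a word
        # Average the two heuristics and round to a whole number of tokens.
        return round((by_chars + by_words) / 2)

    print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # prints 12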

Top P (default: 0.2)

Top_p sampling, also known as nucleus sampling, is an alternative to temperature sampling. Instead of considering all possible tokens, GPT-3 considers only a subset, or nucleus, whose cumulative probability mass adds up to a threshold, the top_p.

Example: if Top P is set to 0.2, GPT-3 will only consider the tokens that make up the top 20% of the probability mass for the next token, allowing vocabulary selection to adapt dynamically to context.
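The snippet below is a minimal, self-contained sketch of the nucleus idea in Python with NumPy; it illustrates the mechanism only and is not the model's actual implementation:

    import numpy as np

    def top_p_filter(probs, top_p=0.2):
        # Sort token probabilities from highest to lowest.
        order = np.argsort(probs)[::-1]
        sorted_probs = probs[order]
        # Keep the smallest prefix whose cumulative mass reaches top_p.
        cumulative = np.cumsum(sorted_probs)
        cutoff = np.searchsorted(cumulative, top_p) + 1
        # Renormalize the surviving nucleus and sample the next token from it.
        kept = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
        return np.random.choice(order[:cutoff], p=kept)

    # Toy distribution over 5 token ids: with top_p=0.2 only token 0 survives,
    # because it alone already covers 20% of the probability mass.
    probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
    print(top_p_filter(probs, top_p=0.2))  # always prints 0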

Frequency Penalty (default: 0)

Mostly applicable to text generation, this setting tells the model to limit repeated tokens, like a friendly reminder not to overuse certain words or phrases. Since repetition is mostly not an issue for sentiment analysis, it is set to 0 here. A short example covering both penalty settings follows the Presence Penalty description below.

Presence Penalty (default: 0)

This parameter tells the model to include a wider variety of tokens in generated text and, like the frequency penalty, applies to text generation rather than sentiment analysis, so it is also set to 0 here.
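As a combined sketch of both penalty settings, the call below applies them to a generation-style prompt with the openai Python package (the model name, prompt, and penalty values are illustrative assumptions; for sentiment analysis both parameters would stay at their default of 0):

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name for illustration
        messages=[{"role": "user",
                   "content": "Write a short product blurb for a coffee grinder."}],
        frequency_penalty=0.5,  # discourage tokens that already appear often
        presence_penalty=0.6,   # nudge the model toward tokens it has not used yet
    )
    print(response.choices[0].message.content)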
