| Setting | Default | Description |
|---|---|---|
| Temperature | 0.2 | The Temperature setting is common to all ChatGPT functions and fine-tunes the sampling temperature with a value between 0 and 1. Use values near 1 for creative applications and values near 0 for well-defined answers. For example, if you want factual or straightforward answers, such as a country's capital, use 0. For less straightforward tasks, such as generating text or content, a higher temperature helps the model capture idiomatic expressions and textual nuance. |
| Max Length | 120 | The maximum number of tokens used to generate prompt results. Tokens can be thought of as pieces of words that the model uses to process text. |
| Top P | 0.2 | Top_p sampling, also known as nucleus sampling, is an alternative to temperature sampling. Instead of considering all possible tokens, the model considers only a subset, or nucleus, of tokens whose cumulative probability mass adds up to the threshold top_p. |
| Frequency Penalty | 0 | Mostly applicable to text generation, this setting tells the model to limit repeated tokens, like a friendly reminder not to overuse certain words or phrases. Since this is mostly not applicable to sentiment analysis, it is set to 0. |
| Presence Penalty | 0 | This parameter encourages the model to include a wider variety of tokens in generated text and, like the Frequency Penalty, applies to text generation rather than sentiment analysis. |
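To make the Temperature and Top P settings above more concrete, the following is a minimal, illustrative sketch of how temperature scaling and nucleus (top_p) sampling pick a token from a list of raw model scores (logits). This is not OpenAI's implementation; the function name and inputs are hypothetical, and it exists only to show why a low temperature and a low top_p both push the model toward the single most likely answer.

```python
import math
import random


def sample_token(logits, temperature=0.2, top_p=0.2, rng=None):
    """Illustrative sketch: pick one token index using temperature + top_p.

    `logits` is a list of raw scores, one per candidate token.
    """
    rng = rng or random.Random(0)

    # Temperature scaling: lower values sharpen the distribution,
    # so the highest-scoring token dominates.
    scaled = [score / max(temperature, 1e-8) for score in logits]

    # Softmax to turn scores into probabilities.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Nucleus: keep the smallest set of top tokens whose cumulative
    # probability mass reaches the top_p threshold.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cumulative = [], 0.0
    for i in order:
        nucleus.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break

    # Renormalize over the nucleus and sample from it.
    mass = sum(probs[i] for i in nucleus)
    r = rng.random() * mass
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]
```

With the defaults shown in the table (temperature 0.2, top_p 0.2), the nucleus typically collapses to the single most likely token, which is why those values suit factual, sentiment-style tasks; raising both settings widens the pool of candidate tokens and produces more varied text.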