...

What are the bot settings and filters? 

...

You can customize filters to continuously tweak the OpenAI Sentiment Analysis Bot until it returns customized, more accurate results for your requirements. Do this by creating filters for specific Ticket Types, Boards, Statuses, and others on the Data Filter (2nd) bot block.

DataFilter bot block - This block has specific filters that determine which tickets qualify for sentiment analysis. The default filters are the following: 

No Format
ticketNoteResolutionFlag
ticketStatusName 
ticketBoardName
ticketAction

The DataFilter bot block window below shows how each filter works. 

(Screenshot: the DataFilter bot block window showing the default filters.)
Other possible filters include UID, Company Name, Owner, Minutes in Progress, Priority, and Ticket Type and Sub Types; their field names are listed below. You can add and remove custom filters using the add filter and remove filter buttons.

No Format
ticketCwUid, ticketCompanyName, ticketOwner, minutesTicketInProgress, priorityName, and ticketTypeName
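
To illustrate how these filter fields narrow down which tickets are analyzed, here is a minimal sketch. It assumes tickets arrive as dictionaries keyed by the field names above; the sample values and the helper functions are hypothetical and only for illustration, not the bot's actual logic.

No Format
# Minimal sketch: a filter predicate over a ticket record.
# Assumption: tickets are dictionaries keyed by the filter field names
# documented above; the sample values below are hypothetical.

def qualifies_for_sentiment(ticket: dict) -> bool:
    """Return True if the ticket passes the default DataFilter checks."""
    return (
        ticket.get("ticketNoteResolutionFlag") is False      # not a resolution note
        and ticket.get("ticketStatusName") != "Closed"        # example status filter
        and ticket.get("ticketBoardName") == "Service Desk"   # example board filter
        and ticket.get("ticketAction") == "added"              # example action filter
    )

def high_priority_only(ticket: dict) -> bool:
    """Example custom filter using one of the optional fields."""
    return ticket.get("priorityName") in {"Priority 1", "Priority 2"}

sample = {
    "ticketNoteResolutionFlag": False,
    "ticketStatusName": "New",
    "ticketBoardName": "Service Desk",
    "ticketAction": "added",
    "priorityName": "Priority 1",
}
print(qualifies_for_sentiment(sample) and high_priority_only(sample))  # True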


        7.  The OpenAI bot block - This block contains the settings and prompts for OpenAI to return sentiment on the ticket summary. Use the settings below to fine-tune the AI model.


Each setting, its default value, and a description are listed below.

Temperature (default: 0.2)

The Temperature setting is common to all ChatGPT functions and fine-tunes the sampling temperature with a number between 0 and 1. Use values closer to 1 for creative applications and values closer to 0 for well-defined answers.

Example: to return factual or straightforward answers, such as a country's capital, use 0. For tasks that are less clear-cut, such as generating text or content, a higher temperature helps capture idiomatic expressions and nuances in the text.
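
As a rough illustration of this setting, here is a minimal sketch of an OpenAI chat completion request with temperature set to 0 for a factual question. The model name and prompt are assumptions for illustration only, not the bot's actual configuration.

No Format
# Minimal sketch, assuming the official openai Python package (v1+)
# and an illustrative model name and prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    temperature=0,  # deterministic, factual answer
)
print(response.choices[0].message.content)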

Max Length (default: 120)

Represents the maximum number of tokens used to generate prompt results. Tokens can be thought of as pieces of words that the model uses to classify text.

Example:
1 token ~= 4 characters
1 token ~= 3/4 of a word
100 tokens ~= 75 words

Check this link for more information.
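
To see the rule of thumb in practice, here is a minimal sketch that counts tokens with the tiktoken library. Using tiktoken here is an assumption for illustration; the bot itself does not expose token counting.

No Format
# Minimal sketch using tiktoken (assumption: tiktoken is installed).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "The customer is frustrated because the server has been down all morning."
tokens = enc.encode(text)

print(len(text), "characters ->", len(tokens), "tokens")
# Roughly 4 characters per token, so a Max Length of 120 tokens
# allows on the order of 90 words of generated output.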

Top P (default: 0.2)

Top_p sampling, also known as nucleus sampling, is an alternative to temperature sampling. Instead of considering all possible tokens, GPT-3 considers only a subset, or nucleus, whose cumulative probability mass adds up to a threshold, the top_p.

Example: if Top P is set to 0.2, GPT-3 will only consider the tokens that make up the top 20% of the probability mass for the next token, allowing for dynamic vocabulary selection based on context.

Frequency Penalty (default: 0)

Mostly applicable to text generation, this setting tells the model to limit repeated tokens, like a friendly reminder not to overuse certain words or phrases. Since it is largely not applicable to sentiment analysis, it is set to 0.

Presence Penalty (default: 0)

This parameter tells the model to include a wider variety of tokens in the generated text. Like the frequency penalty, it applies to text generation rather than sentiment analysis, so it is also set to 0.
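
Putting the defaults together, here is a minimal sketch of a sentiment request that uses the values listed above. The model name, prompt wording, and ticket summary are assumptions for illustration; only the parameter values come from the defaults in this section.

No Format
# Minimal sketch combining the default settings above in one request.
# Assumptions: openai Python package v1+, illustrative model name and prompt.
from openai import OpenAI

client = OpenAI()
ticket_summary = "Customer reports repeated email outages and is asking for an urgent fix."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Classify the sentiment of this ticket summary as "
                f"Positive, Neutral, or Negative:\n{ticket_summary}"
            ),
        },
    ],
    temperature=0.2,       # well-defined, factual output
    max_tokens=120,        # Max Length
    top_p=0.2,             # nucleus sampling threshold
    frequency_penalty=0,   # not needed for sentiment
    presence_penalty=0,    # not needed for sentiment
)
print(response.choices[0].message.content)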

...