
AI (ChatGPT)

You can interact with ChatGPT on Slack.


Last updated 1 year ago

Learn with short videos

How to use

Simply type @Assort [your message] in the channel where you want to use Assort, then send the message.

AI Models

Assort's AI chat feature uses the OpenAI API and currently supports three of its models.

  • gpt-3.5-turbo: Provides the fastest responses and is the most cost-effective.

  • gpt-4: Responses are slightly slower than gpt-3.5-turbo, but it delivers more accurate answers and handles longer texts better.

  • gpt-4-turbo-preview: The most advanced model. As a preview version, it may exhibit some instability. It is expected to be officially released as gpt-4-turbo, at which point gpt-4 will be discontinued and merged into gpt-4-turbo.

Tokens

Using the AI chat feature consumes tokens, a unit defined by Assort. Token consumption varies by model.

  • gpt-4: Consumes about 45 times as many tokens as gpt-3.5-turbo.

  • gpt-4-turbo-preview: Consumes about 20 times as many tokens as gpt-3.5-turbo.

The Pro plan includes 5,000,000 tokens; usage beyond this limit incurs an additional charge of 200 JPY per 1,000,000 tokens. Under normal usage, additional charges are expected to be rare.
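As an illustration of the billing described above, here is a minimal sketch of the overage calculation. The included-token amount, per-million price, and model multipliers come from this page; the pro-rata per-token rounding is an assumption, since the exact rounding rule is not documented.

```python
# Sketch of Assort's token billing as described above.
# Assumption (not confirmed by the docs): overage is billed
# pro-rata per token rather than in 1,000,000-token blocks.

INCLUDED_TOKENS = 5_000_000        # included in the Pro plan
OVERAGE_JPY_PER_MILLION = 200      # additional charge per 1,000,000 tokens

# Approximate consumption multipliers relative to gpt-3.5-turbo
MODEL_MULTIPLIER = {
    "gpt-3.5-turbo": 1,
    "gpt-4": 45,
    "gpt-4-turbo-preview": 20,
}

def tokens_consumed(base_tokens: int, model: str) -> int:
    """Scale a gpt-3.5-turbo-equivalent token count by the model multiplier."""
    return base_tokens * MODEL_MULTIPLIER[model]

def overage_charge_jpy(total_tokens: int) -> float:
    """Additional charge once the included Pro-plan tokens are used up."""
    excess = max(0, total_tokens - INCLUDED_TOKENS)
    return excess * OVERAGE_JPY_PER_MILLION / 1_000_000

# Example: 130,000 base tokens on gpt-4 -> 5,850,000 tokens consumed,
# 850,000 over the included amount -> 170 JPY overage.
used = tokens_consumed(130_000, "gpt-4")
print(used, overage_charge_jpy(used))
```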

For gpt-3.5-turbo, token counting closely follows OpenAI's official method, with Assort's default prompt added on top; this slightly increases actual token consumption.

Furthermore, when a conversation continues in a thread, the previous dialogue (user-mentioned posts and AI response posts) is passed as parameters when calling the OpenAI API, so token consumption grows with the length of the thread. Please refer to OpenAI's official tools for details on how tokens are calculated.
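The effect of thread context on token consumption can be sketched as follows. The whitespace "tokenizer" and the default prompt text are stand-ins for illustration only; Assort's actual prompt and OpenAI's real tokenizer will produce different counts.

```python
# Sketch of why replies in a thread cost more tokens: each API call
# resends the default prompt plus the full prior dialogue as context.
# The word-split "tokenizer" is a crude stand-in for OpenAI's real
# tokenizer, and DEFAULT_PROMPT is a hypothetical placeholder.

def count_tokens(text: str) -> int:
    return len(text.split())  # proxy only; not the real tokenizer

DEFAULT_PROMPT = "You are Assort, a helpful assistant."  # hypothetical

def prompt_tokens(history: list[str], new_message: str) -> int:
    """Tokens sent to the API: default prompt + thread history + new post."""
    parts = [DEFAULT_PROMPT, *history, new_message]
    return sum(count_tokens(p) for p in parts)

history: list[str] = []
for turn in ["What is a token?",
             "A token is a chunk of text.",
             "Give an example."]:
    # Consumption grows each turn: 10, then 17, then 20 proxy tokens.
    print(prompt_tokens(history, turn))
    history.append(turn)
```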

🤖 OpenAI Platform