Have you ever wanted to bring a human-like conversation to your applications and websites? With the ChatGPT API, you can now do that. It’s an easy and powerful way to incorporate AI-based conversations into your product or service. But what exactly is the ChatGPT API, how does it work, and how can it be used? Read on to find out.
What Is the ChatGPT API?
The ChatGPT API is a hosted dialogue service built on a transformer-based natural language processing model created by OpenAI. The launch actually covers two interfaces: the ChatGPT API, which lets any enterprise use ChatGPT's capabilities in its applications, websites, products, and services, and the Whisper API, which provides speech-to-text. The underlying model was trained on a large corpus of conversational text, which allows it to generate human-like responses when prompted with a question or statement. The ChatGPT API wraps this model in a RESTful interface so developers can easily access it and use it in their projects. This means developers no longer have to spend time building their own custom models; they can just plug in ChatGPT and get up and running quickly.
What is the ChatGPT API used for?
Simply put, it allows any business or individual to use ChatGPT's capabilities in their own applications, websites, products, and services. It exposes OpenAI's newest conversational model, and at $0.002 per 1K tokens the price looks very attractive!
Opening up the ChatGPT API is a move reminiscent of Apple launching the App Store under Jobs: it lets enterprises and individual developers around the world build artificial intelligence interfaces into their applications and connect to a ChatGPT-style platform.
How to Apply for the ChatGPT API?
This article assumes that you have already registered an account on the official OpenAI website. If you have not, please refer to our beginner's guide to register one!
After creating an account, log in to the OpenAI website. Click your account information in the upper right corner, then Billing, set up a payment method, and add your card details there to fund your ChatGPT API usage.
How to Use the ChatGPT API?
Once your OpenAI account registration is complete, log in. Click View API keys, then Create new secret key in the upper right corner to generate your own API key. This key can be used to develop applications based on the ChatGPT API.
Note that for security reasons this key is only displayed once, so be sure to copy it somewhere safe before closing the dialog box.
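A common place to keep the key is an environment variable, so it never appears in your source code. The variable name OPENAI_API_KEY is the one the official examples expect; the key value below is a placeholder:

```shell
# Add to your shell profile (e.g. ~/.bashrc) so the key persists across sessions.
export OPENAI_API_KEY="sk-your-secret-key-here"
```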
In addition, click Usage in the left-hand column to see your token usage clearly at a glance; the data is updated every 5 minutes. OpenAI grants each newly registered account a free usage quota of $18. Looked at this way, spending less than $1 on registration gets you $18 of default token credit, which is hardly a loss. Note, however, that the free quota has a time limit, and it becomes invalid once it expires.
Using the ChatGPT API is also very simple. First, you can test it with a simple curl command; just replace $OPENAI_API_KEY with your own API key.
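The curl call in question looks like the following; the endpoint and payload shape follow the official chat completions reference, and the prompt text is just an example:

```shell
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```

The response is a JSON object; the assistant's reply is in the choices[0].message.content field.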
In addition, OpenAI also provides simple example code in various programming languages. In Python, for example, you only need to import the openai package, supply the API key you just created, and choose the newly released gpt-3.5-turbo as the model.
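As a sketch of what that Python call involves, here is a standard-library-only equivalent. It uses urllib instead of the official openai package so that the request structure is explicit; the endpoint, payload shape, and model name are from the documentation, while the function names are my own:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON payload the chat completions endpoint expects."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str) -> str:
    """Send one prompt and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Only hit the network when a key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(chat("Hello!"))
```

With the official openai package (the pre-1.0 versions current at the time of writing), the same call is openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[...]).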
For more examples, please refer to the official examples and official API documentation.
However, as developers, we also have the flexibility to pin a specific build version rather than always tracking the latest features. For example, gpt-3.5-turbo-0301, released today, is a dated snapshot that will be supported for at least three months.
Several issues to note when using the ChatGPT API
1. The gpt-3.5-turbo model
GPT-3.5 is the most powerful text-generation model OpenAI has offered through the API so far. The codename "turbo" refers to an optimized version of GPT-3.5 that is more responsive. OpenAI also calls it "the best model for many non-chat use cases." According to the roadmap, GPT-4 will be released this year. The gpt-3.5-turbo model is claimed to cost only one-tenth as much as the Davinci model, text-davinci-003, and it will receive ongoing updates, with dated snapshots such as gpt-3.5-turbo-0301.
2. How fast is the ChatGPT API?
In my personal experience, the API responds very quickly. It is faster than the official ChatGPT web interface, and the reply arrives all at once rather than streaming out word by word.
3. ChatGPT API token pricing
The official price this time is $0.002 per 1K tokens, which is roughly 750 words; OpenAI says that is "10x cheaper than our existing GPT-3.5 models." Although 1K tokens sounds like a lot, in practice a single request and its response can consume many tokens.
In my observation, a single question typically costs more than 100 tokens, which adds up quickly, especially in continuous sessions: to maintain the continuity of the dialogue, the full message history must be sent back with every request. Used this way, pay-as-you-go is not actually cheap.
In the English language, a token usually corresponds to about 4 characters.
An official example may make this more intuitive. According to the OpenAI documentation, the phrase "ChatGPT is great!" is six tokens; the tokenizer breaks it into "Chat", "G", "PT", " is", " great", and "!".
If you want to check how many tokens (and thus how much money) a given piece of text will cost, OpenAI also provides a free online tokenizer tool.
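As a rough rule of thumb (not the real BPE tokenizer, for which you need the official tool or a tokenizer library), you can estimate tokens from the 4-characters-per-token figure above and convert to cost at the $0.002/1K rate. Note that the heuristic undercounts short, punctuated strings: it gives 4 for "ChatGPT is great!", whereas the real count is 6.

```python
def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, round(len(text) / 4))

def estimate_cost(tokens: int, price_per_1k: float = 0.002) -> float:
    """Cost in dollars at the gpt-3.5-turbo rate of $0.002 per 1K tokens."""
    return tokens / 1000 * price_per_1k

# A ~750-word English message is roughly 1,000 tokens, i.e. about $0.002.
print(estimate_cost(estimate_tokens("word " * 750)))
```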
4. Continuous session capability of ChatGPT API
We know that the strongest feature of the official ChatGPT and ChatGPT Plus is contextual awareness: the ability to hold a continuous conversation with you. This is what gives the feeling of talking to a real-life partner, mentor, or friend.
The API also supports continuous dialogue. You simply include the historical conversation in the messages parameter of each request, passing it back to the model. This inevitably increases token consumption, however, which is not cost-effective. OpenAI seems aware of this: the upper limit for a single request is 4,096 tokens, and exceeding it returns an error.
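To make the mechanics concrete, here is a minimal sketch of managing that message history by hand. The trimming heuristic and function names are my own, not from the official documentation; a real application would count tokens with a proper tokenizer rather than by characters:

```python
MAX_TOKENS = 4096       # hard per-request limit enforced by the API
CHARS_PER_TOKEN = 4     # rough figure for English text

def add_message(history: list, role: str, content: str) -> list:
    """Append a turn, then drop the oldest turns if the estimated
    token count would exceed the per-request limit."""
    history.append({"role": role, "content": content})
    while (len(history) > 1 and
           sum(len(m["content"]) for m in history) / CHARS_PER_TOKEN > MAX_TOKENS):
        history.pop(0)  # sacrifice the oldest context first
    return history

history = []
add_message(history, "user", "What is a token?")
add_message(history, "assistant", "Roughly 4 characters of English text.")
add_message(history, "user", "And how are tokens billed?")

# Every request resends the whole history, so each turn costs more than the last.
payload = {"model": "gpt-3.5-turbo", "messages": history}
```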
5. Regional restrictions on the use of ChatGPT API
Are there any regional restrictions on the use of the ChatGPT API? OpenAI's application programming interface (API) is available in 161 countries and regions worldwide. In Asia this includes Japan, South Korea, Bangladesh, and others, but not mainland China or Hong Kong.
6. Whisper API
In addition to the ChatGPT API, OpenAI also announced the Whisper API. Whisper is an open-source speech-to-text model that OpenAI released in September 2022, and it has been praised and supported by the global developer community. Based on the open-source whisper-large-v2 model, the Whisper API is easy for developers to adopt on demand. The price is $0.006 per minute.
According to the official documentation, the Whisper API can be used to transcribe (in the source language) or translate (into English). It accepts audio in a variety of formats (M4A, MP3, MP4, MPEG, MPGA, WAV, WEBM).
The official also provides Whisper usage examples:
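One such example, in curl form, matching the official transcription endpoint (the file path is a placeholder):

```shell
# Transcribe an audio file in its source language; use the
# /v1/audio/translations endpoint instead to translate into English.
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/audio.mp3" \
  -F model="whisper-1"
```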
Is it necessary to renew ChatGPT Plus?
The last question: now that the ChatGPT API has been released, some people will wonder whether ChatGPT Plus is still worth renewing. Not necessarily. As mentioned above, the ChatGPT API is aimed at programmers and application developers, and there is a relatively high barrier for ordinary users. Tokens also seem cheap, but keeping a conversation going requires resending the message history each time, and there is currently a 4,096-token limit per request; in real use it may not come out cheaper than the $20 monthly price of ChatGPT Plus.
The API does respond faster than Plus. In answer quality, however, the official Plus likely still leads: beyond the model itself, the parameters it is run with are decisive. In a word, you can always trust the official offering.
2022 was a year of enormous attention for AI. With ChatGPT and Stable Diffusion taking the world by storm with their astonishing capabilities, 2023 will see a wave of AI applications integrated into our daily lives. It also means we should work all the harder at developing our own skills.