An improved version of the GPT-4 language model, called GPT-4 Turbo, was introduced at the first OpenAI Developer Conference. The new model is more capable and at the same time more affordable than GPT-4.
GPT-4 Turbo was trained on data extending to April 2023 (its knowledge cutoff), while GPT-4's training data ended in September 2021. The GPT-4 Turbo neural network was trained on a vast number of examples from the internet to predict the likelihood of the next word given the surrounding context. For example, if a sentence ends with the phrase "I look forward to...", GPT-4 Turbo can predict a continuation such as "...your response".
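As a toy illustration of this idea (not OpenAI's actual training or inference code, and with made-up probabilities), next-word prediction amounts to choosing the continuation the model considers most likely given the preceding context:

```python
# Toy illustration of next-word prediction: the model assigns a probability
# to each candidate continuation and the most likely one is selected.
# The probabilities below are invented for demonstration purposes only.
context = "I look forward to"
candidate_probs = {
    "your response": 0.46,
    "hearing from you": 0.31,
    "the weekend": 0.12,
    "nothing": 0.02,
}

best = max(candidate_probs, key=candidate_probs.get)
print(f"{context} ... -> most likely continuation: '{best}'")
```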
GPT-4 Turbo will be available in two versions: one for text-only analysis and one that can also understand images. The text-only version is already available as a preview via the API, and both versions of the model will become generally available in the coming weeks.
An important change is the increase of the context window to 128 thousand tokens, four times larger than in GPT-4. This is equivalent to roughly 300 pages of text in a single request, allowing the language model to better understand the meaning of queries and provide the most appropriate and thoughtful answers without deviating from the topic. A context window of 300 pages, or approximately 100,000 words, is about the length of Emily Brontë's Wuthering Heights or J.K. Rowling's Harry Potter and the Prisoner of Azkaban. This is the largest context window of any commercially available AI model and surpasses that of Anthropic's Claude 2, which supports up to 100,000 tokens.
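As a rough back-of-the-envelope check of those figures (assuming the common approximations of about 0.75 English words per token and roughly 330 words per printed page, which are not exact values):

```python
# Rough estimate of how much text fits into a 128,000-token context window.
# The conversion factors are common approximations, not exact figures.
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75   # roughly 750 words per 1,000 tokens
WORDS_PER_PAGE = 330     # typical printed page

words = CONTEXT_TOKENS * WORDS_PER_TOKEN   # about 96,000 words
pages = words / WORDS_PER_PAGE             # about 290 pages

print(f"~{words:,.0f} words, ~{pages:.0f} pages per request")
```

The result, roughly 96,000 words across about 290 pages, is consistent with the "300 pages / 100,000 words" figure quoted above.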
Performance optimizations for the GPT-4 Turbo model have been announced on the OpenAI blog, and the model is priced at a third of GPT-4's rate for input tokens and half its rate for output tokens.
OpenAI reports that using GPT-4 Turbo costs $0.01 per 1,000 input tokens (approximately 750 words) and $0.03 per 1,000 output tokens, making input three times cheaper and output two times cheaper than the previous model. The cost of image processing with GPT-4 Turbo depends on the image size; for example, processing a 1080 × 1080 pixel image costs $0.00765.
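For illustration, the cost of a text request can be estimated from these per-token rates. The sketch below simply applies the quoted prices and is not an official OpenAI calculator:

```python
# Estimate the cost of a GPT-4 Turbo text request from the published rates:
# $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens.
INPUT_PRICE_PER_1K = 0.01
OUTPUT_PRICE_PER_1K = 0.03

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated price in US dollars for one request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 3,000-token prompt with a 1,000-token answer costs about $0.06.
print(f"${estimate_cost(3000, 1000):.2f}")
```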
GPT-4 Turbo is a language model that can convert text to speech and process queries that include images, and it integrates with DALL-E 3. Improvements in GPT-4 Turbo allow the model to carry out more complex tasks within a single query, and users can request results in a specific output format, such as XML or JSON (a new JSON mode constrains the model to produce syntactically valid JSON).
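A minimal sketch of requesting JSON-formatted output is shown below, assuming the OpenAI Python SDK (v1.x) and the preview model identifier gpt-4-1106-preview; exact model names and availability may differ in your environment:

```python
# Minimal sketch: ask GPT-4 Turbo to answer in JSON using the SDK's
# response_format option. Requires the openai package and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview (assumed identifier)
    response_format={"type": "json_object"},  # JSON mode: output is valid JSON
    messages=[
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "List three capabilities of GPT-4 Turbo."},
    ],
)

print(response.choices[0].message.content)    # a JSON string
```

Note that JSON mode requires the word "JSON" to appear somewhere in the messages, which is why the system prompt mentions it explicitly.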
OpenAI also promises copyright protection for corporate users through its Copyright Shield program, with similar solutions already adopted by Google and Microsoft for their AI models.