OpenAI held its first developer conference

OpenAI, the developer of ChatGPT, held its first developer conference, where it unveiled several major announcements. One of the most interesting was the ability to create custom bots, allowing ChatGPT to be tailored to a variety of tasks, from advising startups to writing recipes. Bots will be built on the OpenAI platform with no programming required, simply by interacting with ChatGPT: users give a bot initial instructions and additional task-specific knowledge.

Custom bots will be able to search the Internet, generate images, analyze data, and interact with third-party services such as Canva and Zapier. Monetization is also planned: at the end of November, OpenAI will launch an online store where users will be able to sell access to the bots they have created.

The company has also updated the GPT-4 model by training it on more recent data. The model now has knowledge of events up to April 2023, whereas previously it was limited to September 2021. The new model is faster and more accessible, and the amount of text it can process in a single query has grown to 128,000 tokens, roughly equivalent to 300 pages of text.
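The "128,000 tokens ≈ 300 pages" equivalence can be sanity-checked with rough arithmetic. The conversion factors below (words per token, words per page) are common rules of thumb, not figures from OpenAI:

```python
# Rough arithmetic behind "128,000 tokens ≈ 300 pages".
# Assumptions (rules of thumb, not from OpenAI):
#   ~0.75 English words per token, ~320 words per printed page.
tokens = 128_000
words = tokens * 0.75      # ≈ 96,000 words
pages = words / 320        # ≈ 300 pages
print(round(pages))        # → 300
```

With these assumptions the context window works out to about 300 pages, matching the figure quoted at the conference.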

In addition, OpenAI introduced several APIs for developers, including an API for working with the new speech synthesis model. A demo is available on Hugging Face: users can enter text, select one of the available voices, and receive an audio recording of the synthesized speech.
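A request to the speech synthesis API boils down to a text, a model, and a voice. The sketch below shows the shape of such a request as a plain payload; the model name `tts-1`, the voice name `alloy`, and the `/v1/audio/speech` endpoint reflect OpenAI's published audio API at the time, but treat them as assumptions rather than a definitive integration:

```python
# Hedged sketch of a text-to-speech request payload.
# Model/voice names and the endpoint path are assumptions based on
# OpenAI's audio API documentation, not taken from this article.
import json

payload = {
    "model": "tts-1",   # assumed name of the speech synthesis model
    "voice": "alloy",   # one of the selectable voices
    "input": "Hello from the first OpenAI developer conference!",
}

# In a real call, this payload would be POSTed to /v1/audio/speech
# with an Authorization header carrying the API key; the response
# body is the audio data.
print(json.dumps(payload, indent=2))
```

The demo mentioned above performs exactly this exchange behind its web form: text in, a voice choice, audio out.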
