OpenAI has made significant changes to its platform, announcing GPT-4 with vision (GPT-4V) for ChatGPT. This update brings new multimodal features that change the way we interact with artificial intelligence. One of the major innovations is the ability to interact with ChatGPT by voice.
Users will now be able to communicate with it using their voice, giving the conversation a more natural character. OpenAI has developed a new text-to-speech model, created in collaboration with professional voice actors, to make ChatGPT's voice sound as realistic as possible. Another interesting feature is the ability to work with images: users will no longer be limited to describing their queries in text, but can show ChatGPT exactly what they are interested in. This is based on the multimodal capabilities of GPT-3.5 and GPT-4. OpenAI has also paid attention to the usability of the platform.
Voice and image input will be available on both iOS and Android, allowing users to interact with ChatGPT from their mobile devices. The OpenAI team has also addressed user safety. Whisper, OpenAI's open-source speech-recognition system, is used to accurately transcribe spoken words into text. This update is an important step in the development of multimodal artificial intelligence.
The new features not only make interacting with ChatGPT more natural but also extend its capabilities into our daily lives. They will roll out to Plus and Enterprise users over the next two weeks. This is great news for everyone using the OpenAI platform who wants to interact with artificial intelligence more effectively.
Ailib neural network catalog. All information is taken from public sources.