At Google's annual I/O developer conference, Demis Hassabis, CEO of Google DeepMind, presented a first look at what is internally referred to as a universal AI assistant. The project, codenamed Project Astra, is a multimodal AI assistant that operates in real time: it can "see" and interpret the world around it, recognize objects, and help users with everyday tasks.
"This idea has been maturing in my mind for a long time. We are creating a universal assistant. It is multimodal and with you all the time... This assistant will be extremely useful. You will get used to its constant presence when you need it," Hassabis said during the demonstration.
Google also released a short video showing Project Astra's capabilities at this early stage of development. In the video, an employee at Google's London office activates the AI assistant and asks it to point out objects capable of making sounds. As she pans her smartphone around the room, she settles on a speaker on her desk, and the AI responds instantly. She then asks for a description of the colored crayons in a glass, and the AI replies that they can be used to create "vibrant works of art." Pointing the camera at a piece of code on a monitor, she asks what exactly that code does, and Project Astra answers accurately without delay. The assistant also identifies the location of Google's office by looking out the window and completes a number of other tasks. Everything happens in real time, leaving a strong impression.
Google's Project Astra represents a significant step in the development of artificial intelligence interfaces, bringing us closer to the idea of a true real-time AI assistant. Building on a powerful model such as Gemini 1.5 Pro enables a high level of understanding and interaction with the user, making the AI capable of handling more complex and diverse tasks.
Fast query processing and low response latency are critical to a smooth, natural human-machine interaction. The algorithmic and infrastructure optimizations the developers have pursued over the past six months help achieve the required level of performance.
The prospect of running Project Astra on a variety of devices, including smartphones and smart glasses, opens new horizons for the application of AI. It could serve not only to execute commands but also to provide contextual information and assist with a wide range of tasks in any environment.
For now, there is no definite information on when Project Astra will become available to the general public, which is normal practice for early versions of products at this scale. Google is likely to run further testing and refinement before releasing the product to end users, to ensure the most capable and reliable experience possible.