Microsoft has updated Copilot with a new OpenAI model, GPT-4 Turbo, improved image-creation capabilities through DALL-E 3, and a new code interpreter for programmers.
The technology giant continues to develop new AI and generative-AI capabilities, building on the innovations it has presented over the last ten months, which it describes as the product of years of AI research, close collaboration and revolutionary innovation.
Microsoft stated that 2023 “will be remembered as the year in which AI fully established itself in everyday life, changing the way we work and approach everyday tasks.” In this vein, it announced a series of new features for its AI-powered assistant Copilot, which it says offers users “the best way to take advantage of the benefits of AI,” as it highlighted in a post on its official blog.
One of these innovations is the inclusion of the latest OpenAI model, GPT-4 Turbo, in the Copilot chatbot, allowing users to generate responses with “more detailed” context. With this new model, the assistant will also be able to handle “longer and more complex” tasks, such as writing programming code.
The GPT-4 Turbo model, launched just this November, is “more capable” and “more economical” than its predecessor. It also supports a context window – the amount of prompt text users can enter – of 128K tokens, instead of 32K. In other words, the new model can fit “the equivalent of more than 300 pages of text in one message.”
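The “more than 300 pages” figure can be sanity-checked with a quick back-of-envelope calculation. The conversion factors below are not from Microsoft's announcement; they are common rough heuristics (about 0.75 English words per token, about 300 words per printed page):

```python
# Back-of-envelope check of the "more than 300 pages" claim.
# Assumptions (not from the article): ~0.75 English words per token
# and ~300 words per printed page -- both rough heuristics.

CONTEXT_TOKENS = 128_000   # GPT-4 Turbo context window
WORDS_PER_TOKEN = 0.75     # assumed average for English text
WORDS_PER_PAGE = 300       # assumed average page length

words = CONTEXT_TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"~{words:,.0f} words, ~{pages:.0f} pages")
```

Under these assumptions the window works out to roughly 320 pages, consistent with the “more than 300 pages” description.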
Another improvement in the GPT-4 Turbo model for Copilot is that its knowledge is updated with information through April 2023. Microsoft details that this improvement is currently being tested by “select users” and will be fully integrated in the coming weeks.
In this way, through Copilot, users will be able to access capabilities of the model that were previously available only to those using OpenAI’s paid API.
Alongside the integration of GPT-4 Turbo, Microsoft announced Bing’s Deep Search functionality, which leverages the power of the new OpenAI model to offer search results optimized for “complex topics,” for example with more complete descriptions.
Microsoft has also added the ability to create “more accurate” and “higher quality” images thanks to the integration of DALL-E 3, the generative AI model OpenAI launched in September.
This model is able to understand many nuances and details in text descriptions, so it can translate users’ ideas into accurate images “more easily.” Users can now ask Copilot to create images using DALL-E 3.
Following this line, Microsoft has combined the capabilities of the GPT-4 model with Bing’s image search to “provide greater understanding of images for queries.”
Separately, Microsoft announced that it is developing a new capability that will perform complex tasks such as calculations, coding, data analysis, visualization and mathematics “with greater precision” and deliver the results directly to users.
This is a code interpreter tool in which Copilot writes code to answer complex user requests expressed in natural language. It then runs that code in an isolated environment and, after obtaining the results, presents them to users in natural language to provide higher-quality answers.
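The pattern described above – generate code, run it in a separate environment, return the captured output – can be sketched in a few lines. This is a toy illustration, not Microsoft's implementation; a production sandbox would add real isolation (containers, resource limits, no network), which a plain child process does not provide:

```python
# Toy illustration of the code-interpreter pattern the article describes:
# run generated code in a separate, time-limited child process and hand
# the captured output back so it can be phrased as a natural-language answer.
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Execute a code snippet in a child interpreter and return its stdout."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # keep runaway code from hanging the host
    )
    return result.stdout.strip()

# Example: answering a user's arithmetic request by executing generated code.
answer = run_untrusted("print(1234 * 5678)")
print(answer)  # -> 7006652
```

The key design point is that the generated code never runs inside the assistant's own process: only its text output crosses the boundary.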
Microsoft also detailed that, with this code interpreter, users can upload files to Copilot to work with their own data and code, in addition to Bing search results.