OpenAI’s new agent can compile detailed reports on practically any topic

OpenAI claims Deep Research represents a significant step towards its overarching goal of developing artificial general intelligence (AGI) that matches (or surpasses) humans. It says that what takes the tool “tens of minutes” would take a human many hours.

In response to a single query, such as “draw me up a competitive analysis between streaming platforms,” Deep Research will search the web, analyze the information it encounters, and compile a detailed report that cites its sources. It can also draw on files uploaded by users.

OpenAI developed Deep Research using the same chain-of-thought reinforcement learning methods it used to create its o1 multistep reasoning model. But while o1 was designed to focus primarily on mathematics, coding, and other STEM questions, Deep Research can tackle a far broader range of subjects. It can also adjust its approach mid-task in response to new data it encounters during its research.

This doesn’t mean that Deep Research is immune to the same pitfalls as other AI models. OpenAI says the agent can sometimes hallucinate facts and present its users with incorrect information, albeit at a “notably” lower rate than ChatGPT. And because each question may take between five and 30 minutes for Deep Research to answer, it’s very compute intensive—the longer it takes to research a query, the more compute required.

Despite that, Deep Research is now available at no extra cost to subscribers to OpenAI’s paid Pro tier, and will soon roll out to its Plus, Team, and Enterprise users.
