DeepSeek introduces new method for enhancing reasoning abilities of large language models

Africa-Press – Mozambique. Chinese artificial intelligence (AI) start-up DeepSeek has introduced a new method for enhancing the reasoning abilities of large language models (LLMs), reportedly surpassing current approaches.

Working with researchers from Tsinghua University, DeepSeek developed a dual technique that combines generative reward modeling (GRM) with self-principled critique tuning, the South China Morning Post reported on Sunday.

This dual method is designed to enable LLMs to provide more accurate and faster responses to general queries, according to a paper published Friday.

The researchers said the resulting DeepSeek-GRM models outperformed existing techniques, achieving “competitive performance” with robust public reward models. Reward modeling is a process used to align an LLM’s behavior with human preferences.
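In broad terms, a generative reward model acts as a text-generating judge rather than a scorer that outputs a single number. As a rough illustration only, the Python sketch below shows one way such a signal could be produced, in the spirit of the techniques the paper names: the judge model writes its own evaluation principles, critiques a candidate answer against them, and ends with a score that is parsed from its text. The `query_llm` helper and the prompt format are hypothetical stand-ins, not DeepSeek's actual interface.

```python
# Minimal sketch (not DeepSeek's code) of a generative reward model that
# self-generates evaluation principles, critiques an answer against them,
# and emits a parseable score.

import re

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    # Canned output so the sketch runs without network access.
    return ("Principles: be factually correct; answer the question directly.\n"
            "Critique: the answer is correct but gives only an approximation.\n"
            "Score: 7/10")

def generative_reward(question: str, answer: str) -> float:
    """Score an answer via generated principles and a critique, not a scalar head."""
    prompt = (
        f"Question: {question}\n"
        f"Candidate answer: {answer}\n"
        "First list the principles a good answer must satisfy, "
        "then critique the answer against them, "
        "and end with a line 'Score: X/10'."
    )
    judgment = query_llm(prompt)
    match = re.search(r"Score:\s*(\d+(?:\.\d+)?)\s*/\s*10", judgment)
    return float(match.group(1)) / 10.0 if match else 0.0

if __name__ == "__main__":
    print(generative_reward("How far away is the Moon?", "About 384,000 km."))
```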

DeepSeek plans to make its GRM models open source, the researchers said, although no specific timeline was given.

The paper, published on the arXiv preprint repository, comes amid growing interest in the company’s future developments, following the global attention drawn by its V3 foundation model and R1 reasoning model.
