What Is DeepSeek? A Look at China’s Entry Into the LLM World


Insider Brief

  • DeepSeek-V3, an open-source large language model with 671 billion parameters, combines a cutting-edge Mixture-of-Experts architecture with Multi-Token Prediction to deliver efficient, high-performance AI capabilities.
  • Trained on a 14.8-trillion-token dataset, it outperforms many open-source competitors and rivals closed models like GPT-4 on specialized benchmarks, achieving these results at a fraction of the training cost.
  • Available on GitHub and priced competitively, DeepSeek-V3 democratizes advanced AI technologies but still lags behind leading closed models in general reasoning and long-context applications.

China’s AI ambitions have taken a significant leap forward with the release of DeepSeek-V3, an ultra-large language model from the Chinese AI startup DeepSeek. Known for its open-source approach and cutting-edge innovation, DeepSeek aims to challenge global leaders in artificial intelligence. Its latest offering, DeepSeek-V3, is already being hailed as one of the strongest open-source large language models (LLMs) available, according to a report from VentureBeat and details shared in the DeepSeek technical paper.

DeepSeek-V3 employs a Mixture-of-Experts (MoE) architecture, boasting a colossal 671 billion parameters, although only 37 billion are activated per token, VentureBeat reports. This selective activation ensures cost-effective training and inference without compromising accuracy, as detailed in the DeepSeek technical report.

The architecture combines multi-head latent attention (MLA) with the DeepSeekMoE framework, optimizing performance across diverse tasks.

In MLA, attention queries, keys and values are compressed into low-dimensional latent representations to reduce memory and computational overhead. Because only these compact latents need to be cached and re-expanded during generation, the model stores and processes far less data at inference time without sacrificing accuracy.
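To make the compression idea concrete, here is a minimal PyTorch-style sketch of latent key-value attention. It is illustrative only: the dimensions, layer names and single shared latent are assumptions for clarity, and it omits details of DeepSeek-V3's actual MLA design (such as its handling of positional embeddings).

```python
# Illustrative sketch of latent key-value attention: keys and values are
# projected into a small latent vector, and only that latent would need to be
# cached during generation. Sizes are made up, not DeepSeek-V3's configuration.
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress to a shared latent
        self.k_up = nn.Linear(d_latent, d_model)     # expand latent back to keys
        self.v_up = nn.Linear(d_latent, d_model)     # expand latent back to values
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        latent = self.kv_down(x)  # this small tensor is what a KV cache would store
        k = self.k_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head**0.5, dim=-1)
        return self.out((attn @ v).transpose(1, 2).reshape(b, t, -1))

x = torch.randn(2, 16, 1024)
print(LatentKVAttention()(x).shape)  # torch.Size([2, 16, 1024])
```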

According to the DeepSeek technical paper, the model was pre-trained on a dataset of 14.8 trillion tokens drawn from a mixture of high-quality text sources, then further refined through supervised fine-tuning and reinforcement learning. These steps aligned DeepSeek-V3's responses with human preferences while preserving its strength in reasoning and factual accuracy.

The DeepSeekMoE framework divides the model into smaller, specialized sub-networks called "experts" and dynamically activates only a subset of these experts for each input. The framework includes advanced features such as load balancing (avoiding overloading specific experts) and cost-efficient routing, which keep the model running efficiently while maintaining high performance.
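The following toy sketch shows the basic mechanics of expert routing: a gating network scores all experts for each token, only the top-k are actually run, and their weighted outputs are combined. The expert count, sizes and routing details are assumptions for illustration and do not reflect DeepSeek-V3's actual configuration, which uses far more (and finer-grained) experts.

```python
# Toy Mixture-of-Experts layer: each token activates only k of n experts,
# so most parameters stay idle on any given input. Sizes are illustrative.
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        scores = torch.softmax(self.gate(x), dim=-1)
        weights, idx = scores.topk(self.k, dim=-1)     # pick top-k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(32, 512)
print(ToyMoELayer()(tokens).shape)  # torch.Size([32, 512])
```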

The VentureBeat article highlights two advances introduced in DeepSeek-V3 that elevate its capabilities. The first is its auxiliary-loss-free load-balancing strategy, which dynamically adjusts how tokens are routed to experts so that computational loads stay even, avoiding the trade-offs typically associated with load balancing in MoE models. The second is Multi-Token Prediction (MTP), which enables the model to predict multiple tokens simultaneously. According to DeepSeek's technical paper, this enhances training efficiency and accelerates text generation to a rate of 60 tokens per second.
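The load-balancing idea can be sketched roughly as follows, based on the paper's description of a bias-adjusted router: a per-expert bias is added to routing scores (for expert selection only) and nudged after each batch, rather than adding an auxiliary balancing loss. This is a simplified illustration; the update rule, step size and values below are assumptions.

```python
# Simplified sketch of auxiliary-loss-free load balancing: a per-expert bias
# steers token routing toward under-used experts and away from overloaded ones.
# The bias affects only which experts are selected, and is updated online
# rather than through a loss term. Constants are illustrative.
import torch

n_experts, k, update_speed = 8, 2, 0.01
bias = torch.zeros(n_experts)                  # adjusted online, not by gradients

def route(scores: torch.Tensor) -> torch.Tensor:
    """scores: (tokens, n_experts) affinity scores from the gating network."""
    global bias
    _, idx = (scores + bias).topk(k, dim=-1)   # bias shifts selection only
    # Measure how many tokens each expert just received.
    load = torch.bincount(idx.flatten(), minlength=n_experts).float()
    target = load.mean()
    # Push the bias down for overloaded experts, up for under-used ones.
    bias = bias - update_speed * torch.sign(load - target)
    return idx

for _ in range(100):
    route(torch.rand(256, n_experts))
print(bias)  # drifts to counteract any persistent imbalance in routing
```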

The DeepSeek technical paper also provides details on benchmark performance. On various industry-standard tests, DeepSeek-V3 outperformed open-source competitors such as Meta's Llama-3.1-405B and even rivaled closed models like OpenAI's GPT-4 in several areas. Its performance on Chinese-language and math-centric tests has been particularly noteworthy, with scores surpassing other models on benchmarks like MATH-500 (90.2 vs. the next-best 80). However, as VentureBeat notes, DeepSeek-V3 slightly lags behind Anthropic's Claude 3.5 Sonnet in some English-language and general reasoning benchmarks, such as MMLU-Pro and GPQA-Diamond.

Training efficiency is another area where DeepSeek has made strides. The DeepSeek technical report reveals that the entire training process for the V3 model required just 2.788 million H800 GPU hours, amounting to an estimated $5.57 million. By contrast, Meta's Llama-3.1 reportedly cost over $500 million to train, which underscores DeepSeek's achievement in optimizing resource utilization. The cost savings, according to the technical paper, stem from engineering choices such as FP8 mixed-precision training, which reduces memory usage and accelerates computations. Additionally, the DualPipe algorithm overlaps computation with communication in distributed training, hiding most of the overhead of cross-node operations.
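For context, the dollar figure follows directly from the reported GPU hours, assuming the roughly $2-per-H800-GPU-hour rental rate the technical report uses for its estimate:

```python
# Back-of-the-envelope check of the reported training cost, assuming the
# ~$2 per H800 GPU-hour rental rate used in the technical report's estimate.
gpu_hours = 2.788e6          # total H800 GPU hours reported for training
price_per_gpu_hour = 2.0     # assumed rental rate in USD
print(f"${gpu_hours * price_per_gpu_hour / 1e6:.3f} million")  # -> $5.576 million
```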

As reported by VentureBeat, DeepSeek-V3 is available on GitHub under the company’s model license, offering enterprises access to an API priced competitively at $0.27 per million input tokens. The model is also integrated into DeepSeek Chat, a ChatGPT-like platform for testing its capabilities.
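For developers, access looks like a standard chat-completions call. The sketch below assumes DeepSeek's OpenAI-compatible endpoint and the "deepseek-chat" model name as described in its API documentation, along with an API key from DeepSeek; adjust to the current docs before use.

```python
# Minimal sketch of calling the DeepSeek API, assuming its OpenAI-compatible
# chat-completions endpoint and the "deepseek-chat" model name from DeepSeek's
# documentation; a DeepSeek API key is required.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",        # placeholder, not a real key
    base_url="https://api.deepseek.com",    # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Summarize what DeepSeek-V3 is."}],
)
print(response.choices[0].message.content)
```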

The release of DeepSeek-V3 is closing the gap between open-source and closed-source AI systems. As noted in the technical paper, by delivering near-equivalent performance at a fraction of the cost, DeepSeek-V3 challenges the dominance of Western AI giants and democratizes access to cutting-edge AI technologies.

Despite its success, DeepSeek-V3 is not without limitations. VentureBeat points out that its performance still falls short in some general-purpose tasks compared to leading closed models like GPT-4 and Claude 3.5 Sonnet. Additionally, as the race for artificial general intelligence (AGI) accelerates, the DeepSeek technical paper suggests that the company will need to address these gaps while innovating further. Future iterations may also include enhanced support for long-context applications, which DeepSeek-V3 already partially addresses with a maximum context length of 128,000 tokens.

DeepSeek-V3 exemplifies how innovation in architecture and training methodologies can redefine the boundaries of what open-source AI can achieve. By delivering state-of-the-art performance with economical training costs, DeepSeek has positioned itself as a formidable contender in the global AI landscape, according to both VentureBeat and the DeepSeek technical paper.
