Post-Training Evolution in V4: OPD Replaces Mixed RL, Distilling Multiple Expert Models into One
According to monitoring by Beating, DeepSeek V4's post-training methodology has changed significantly: the mixed RL phase of V3.2 has been completely replaced by On-Policy Distillation (OPD).

The new process has two steps. First, domain expert models are trained on top of the V3.2 pipeline in areas such as mathematics, coding, agentic behavior, and instruction following; each expert is fine-tuned and then trained with reinforcement learning using GRPO (sketched below). Second, multi-teacher OPD distills the capabilities of more than ten experts into one unified model: the student generates its own trajectories, and for each teacher it performs reverse-KL logit distillation over the full vocabulary on those trajectories. Aligning logits this way merges the experts' capabilities into a single parameter space and avoids the capability conflicts commonly seen in traditional weight merging and mixed RL.

The report also introduces a Generative Reward Model (GRM). For tasks that are difficult to validate with rules, rubric-guided RL data is used to train the GRM instead of a traditional scalar reward model, letting the actor network generate and evaluate at the same time and generalize to complex tasks from a small amount of diverse human annotations.
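GRPO's defining trait in step one is group-relative advantage estimation: sample a group of completions per prompt, score them, and normalize rewards within the group instead of learning a value network. A minimal sketch of that computation, with illustrative names (nothing here is from a DeepSeek release):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size), one scalar reward per sampled
    completion. GRPO replaces a learned value baseline with group
    statistics: advantage = (reward - group mean) / group std."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)
```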
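The step-two loss can be sketched in the same spirit. The report confirms only the ingredients: trajectories sampled by the student itself (which is what makes the distillation on-policy), full-vocabulary logits, and a reverse KL against each teacher. The uniform teacher averaging and the `generate` call below are assumptions:

```python
import torch
import torch.nn.functional as F

def reverse_kl(student_logits: torch.Tensor,
               teacher_logits: torch.Tensor) -> torch.Tensor:
    """Both tensors: (batch, seq_len, vocab_size), scored on the same
    student-generated tokens. Reverse KL = sum_v p_s(v) * (log p_s(v) -
    log p_t(v)), taken over the full vocabulary at every position."""
    log_p_s = F.log_softmax(student_logits, dim=-1)
    log_p_t = F.log_softmax(teacher_logits, dim=-1)
    return (log_p_s.exp() * (log_p_s - log_p_t)).sum(dim=-1).mean()

def multi_teacher_opd_step(student, teachers, prompts):
    sequences = student.generate(prompts)  # hypothetical sampling API
    student_logits = student(sequences)
    loss = 0.0
    for teacher in teachers:  # frozen domain experts
        with torch.no_grad():
            teacher_logits = teacher(sequences)
        loss = loss + reverse_kl(student_logits, teacher_logits)
    return loss / len(teachers)  # assumed uniform weighting
```

Reverse KL is mode-seeking: on its own trajectories the student is pushed to commit to whatever a given teacher assigns high probability, rather than averaging parameters directly, which is one plausible reading of why the report says this avoids the conflicts of traditional weight merging.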
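For the GRM, the report states only that rubric-guided data trains a model that generates and evaluates simultaneously. One way such a generative scorer could be wired up, with the prompt template, score format, and `model.generate` all hypothetical:

```python
import re

RUBRIC_PROMPT = """Rubric:
{rubric}

Response to evaluate:
{response}

Critique the response against each rubric item, then end with a line
'Score: <0-10>'."""

def grm_reward(model, rubric: str, response: str) -> float:
    # The GRM writes a critique in natural language; the reward is parsed
    # from its final line rather than read off a scalar head.
    critique = model.generate(RUBRIC_PROMPT.format(rubric=rubric,
                                                   response=response))
    match = re.search(r"Score:\s*(\d+(?:\.\d+)?)", critique)
    return float(match.group(1)) / 10.0 if match else 0.0  # 0 if unparseable
```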