Something important I've noticed: the real war over artificial intelligence isn't about the chips themselves, but about something much deeper called CUDA. Nvidia's software ecosystem has locked in roughly 90% of the world's AI developers, making nearly everyone dependent on its environment.
But in recent years, we've seen a radical shift. Chinese companies didn't attempt direct confrontation; instead, they chose a completely different path: a revolution in algorithms. From late 2024 into 2025, Chinese companies collectively moved toward mixture-of-experts (MoE) models, a simple but powerful idea: split the large model into many smaller expert sub-networks and activate only the ones a given input actually needs.
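The routing idea above can be sketched in a few lines. This is a deliberately toy illustration of top-k MoE gating, not any company's actual implementation: the expert count, top-k value, and the scalar "experts" are all hypothetical stand-ins for full feed-forward blocks.

```python
import math

# Toy mixture-of-experts (MoE) routing: only the top-k experts
# (here 2 of 8) run per input, so the active parameter count is a
# small fraction of the total. All numbers are illustrative.

NUM_EXPERTS = 8
TOP_K = 2

# Each "expert" is a scalar function standing in for an FFN block.
experts = [lambda x, s=s: x * s for s in range(1, NUM_EXPERTS + 1)]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, gate_scores):
    """Route input x through the top-k experts, weighted by gate probabilities."""
    probs = softmax(gate_scores)
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    # Renormalize the gate weights over the selected experts only.
    denom = sum(probs[i] for i in top)
    return sum((probs[i] / denom) * experts[i](x) for i in top)

# Example: the gate strongly prefers experts 2 and 5; the other six never run.
scores = [0.1, 0.0, 3.0, 0.2, 0.1, 2.0, 0.0, 0.1]
y = moe_forward(1.0, scores)
```

The key property is sparsity: the gate decides which experts fire, so compute per token scales with the top-k active experts rather than with the total expert count.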
DeepSeek V3 is a clear example: 671 billion parameters, but only 37 billion (about 5.5%) are active for any given token. The reported training cost? $5.6 million, compared to $78 million for GPT-4. The algorithmic difference shows up directly in price: 25 to 75 times cheaper to use than Claude.
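Taking the article's figures at face value, the two headline ratios are easy to check, as a quick sanity calculation (the inputs are the claims above, not independently verified numbers):

```python
# Sanity-check the article's claimed figures.

total_params = 671e9    # DeepSeek V3 total parameters (claimed)
active_params = 37e9    # parameters active per token (claimed)
deepseek_cost = 5.6e6   # reported DeepSeek V3 training cost, USD
gpt4_cost = 78e6        # reported GPT-4 training cost, USD

active_fraction = active_params / total_params  # fraction of the model that runs per token
cost_ratio = gpt4_cost / deepseek_cost          # how many times cheaper the training run was

print(f"active fraction: {active_fraction:.1%}")  # ~5.5%
print(f"training cost ratio: {cost_ratio:.1f}x")  # ~13.9x
```

So under these claims, roughly one parameter in eighteen is active per token, and the training run cost about fourteen times less than GPT-4's.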
The result is striking: in February 2026, usage of Chinese models on OpenRouter grew 127% in just three weeks, surpassing American models for the first time. Their share went from 2% to 60% in one year.
But the real bottleneck was training, not inference. Here came the second move: domestic chips. In 2025, China brought up a fully domestic production line built around Loongson processors and Taichu AI accelerators, and within months began training large-scale models on them. In January 2026, Zhipu AI released the first advanced image model trained entirely on Chinese domestic chips.
This is a qualitative shift: from "inference capability" to "training capability." The difference is enormous.
Meanwhile, the United States faces a real electricity crunch: data centers already consume 4% of its electricity, a share that could reach 12% by 2030. China, by contrast, has a huge energy advantage: it generates 2.5 times as much electricity as the United States, and its industrial electricity is 4 to 5 times cheaper.
What China exports now isn't products or factories, but tokens: the small units of text that AI models process. They are produced in Chinese computing factories and transmitted over cables to the rest of the world.
DeepSeek is now available in 37 languages, 26,000 companies worldwide hold accounts, and 58% of new startups have adopted it. In China alone, it holds 89% of the market.
This reminds me of the semiconductor war with Japan forty years ago. But this time, China is building an entirely independent ecosystem, something Japan never did: optimized algorithms, domestic chips, 4 million developers in the Ascend ecosystem, and global distribution of services.
The price is high: domestic companies are losing billions building this system. But these are not losses from mismanagement; they are a necessary war tax.
The landscape has changed. Eight years ago, the question was, "Can we survive?" Today it is, "What price must we pay to survive?" That shift in the question is itself progress.