Qualcomm Demonstrates AI200 Inference Rack: Integrating Proprietary Accelerators with AMD CPUs
IT House, March 3 — According to the official Qualcomm blog and an on-site report by German outlet ComputerBase, Qualcomm showcased a physical prototype of its AI200 rack-mounted AI inference solution at MWC26 in Barcelona. The product is expected to reach commercial availability in the second half of this year.
IT House learned that each AI200 rack stands 51U tall and comprises 7 systems of 5U each. Within each 5U system, 4U is dedicated to AI200 acceleration cards, with 2 cards per 1U tray; the remaining 1U houses 2 AMD EPYC "Milan" processors. For connectivity, PCIe handles short-range links, while 800G Ethernet handles larger-scale interconnects.
Overall, a single AI200 rack contains 56 AI200 acceleration cards with a combined 43TB of memory across all cards, plus 14 AMD EPYC server processors.
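The per-rack totals follow directly from the layout described above; a quick sketch of the arithmetic (assuming 1 TB = 1000 GB for the memory figure):

```python
# Per-rack totals implied by the reported AI200 layout.
systems_per_rack = 7   # 5U systems in the 51U rack
trays_per_system = 4   # 4U of each system holds accelerator trays
cards_per_tray = 2     # two AI200 cards per 1U tray
cpus_per_system = 2    # two AMD EPYC CPUs in the remaining 1U

cards_per_rack = systems_per_rack * trays_per_system * cards_per_tray
cpus_per_rack = systems_per_rack * cpus_per_system

total_memory_tb = 43   # reported total across all cards
memory_per_card_gb = total_memory_tb * 1000 / cards_per_rack

print(cards_per_rack)              # 56 acceleration cards
print(cpus_per_rack)               # 14 EPYC processors
print(round(memory_per_card_gb))   # 768 (GB per card, implied)
```

The implied ~768 GB of memory per card is a derived figure, not one stated in the report.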
According to the German media report, Qualcomm's AI250 rack system, due in 2027, will still use AMD processors in its head node, while the data center products planned for 2028 will include the AI300 system along with Qualcomm's own in-house CPUs.