88% of companies have experienced AI agent security incidents. Yet only 22% manage their agents as identities.
Okta CEO Todd McKinnon said something on The Verge that caught my attention:
AI agents shouldn't just be tools; they should have their own identities. Log in, authenticate, and leave logs just like employees do.
Here's the background.
More and more AI agents are running inside enterprises, able to access databases, call APIs, and send emails. Yet most companies still run these agents under their creators' account permissions.
What does that mean in practice? If an agent causes an incident, you have no record of who authorized it, what it did, or when it did it.
McKinnon's argument: agents need independent identities, independent permissions, and independent logs, plus a kill switch so that a misbehaving agent can be shut down with one click.
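To make the idea concrete, here is a minimal sketch of what that model could look like. All names here (`AgentIdentity`, `invoice-bot-01`, the permission strings) are illustrative, not any real product's API: each agent carries its own identity and owner, a scoped permission set, an audit log of every attempted action, and a kill switch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Hypothetical sketch: an agent as a first-class identity."""
    agent_id: str
    owner: str                                   # the human who authorized this agent
    permissions: set = field(default_factory=set)
    enabled: bool = True                         # the kill switch
    audit_log: list = field(default_factory=list)

    def act(self, action: str) -> bool:
        """Attempt an action; every attempt is logged under the agent's own identity."""
        allowed = self.enabled and action in self.permissions
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "action": action,
            "allowed": allowed,
        })
        return allowed

    def kill(self) -> None:
        """One-click shutdown: the agent stops acting, but its log survives."""
        self.enabled = False

agent = AgentIdentity("invoice-bot-01", owner="alice",
                      permissions={"db.read", "email.send"})
agent.act("db.read")      # allowed, and logged
agent.act("db.delete")    # denied: not in the permission set, still logged
agent.kill()
agent.act("email.send")   # denied: kill switch engaged
```

The point of the sketch is the audit trail: even after the kill switch fires, the log still answers exactly the questions above, namely who authorized the agent, what it did, and when.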
I believe that agent identity will become a core topic for enterprise AI in the second half of 2026.
Whoever gets this infrastructure right first will be the tollbooth for the next round of AI infrastructure development.