Don't mess with my wallet: Apple research shows users dislike AI being overly clever
IT House, February 13 — Apple’s Machine Learning Research team published a paper on February 7 titled “Mapping the Design Space of User Experience for Computer Use Agents,” which examines users’ real expectations and interaction preferences for AI agents.
The researchers noted that despite significant market investment in developing AI agents, exploration of interface forms and interaction logic remains insufficient. To address this, the team analyzed existing products and conducted field user testing to clarify design standards in this emerging field.
In the first phase of the study, according to a blog post cited by IT House, the team closely analyzed nine mainstream desktop and mobile AI agents, including Claude Computer Use, OpenAI Operator, and AutoGLM.
Drawing on interviews with eight experienced practitioners, the team constructed a classification system with four key dimensions: “User Commands,” “Activity Explainability,” “User Control,” and “Mental Models.” This system covers the entire process, from how users issue commands to how the AI presents its operation plans, reports errors, and hands back control.
In the second phase, the study employed the classic “Wizard of Oz” method. The team recruited 20 users experienced with AI and asked them to complete vacation-rental booking or online shopping tasks via a chat interface.
To rule out technical failures and capture users’ genuine psychological reactions and behavioral patterns when confronted with AI decisions, Apple had researchers simulate the AI’s operations (including intentional errors and deadlocks). Participants were unaware that the “AI” behind the screen was actually a researcher in the next room.
Results showed that users have nuanced needs for “transparency”: they want to understand AI’s actions but reject micromanaging every step, as that would defeat the purpose of using intelligent agents.
These needs vary by scenario: in exploratory or unfamiliar tasks, users desire more intermediate steps and explanations; in high-risk situations (such as payments or account modifications), users demand absolute confirmation rights.
The study emphasizes that trust is the foundation of human-computer interaction but is extremely fragile. When AI agents make unilateral decisions without asking in ambiguous situations (silent assumptions), or deviate from plans without informing users, trust can quickly collapse.
When facing uncertainty, users do not want the AI to make arbitrary choices in the name of “automation.” Instead, they prefer the AI to pause and ask for clarification, especially when such choices could lead to purchasing errors or other tangible losses.
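The preference the study reports, pause on ambiguity and require explicit confirmation before high-risk steps, can be sketched as a simple decision rule. The function and names below are illustrative assumptions for this article, not a design taken from the paper:

```python
# Hypothetical sketch (not from the Apple paper): a next-step policy for an
# agent that never makes silent assumptions and gates high-risk actions,
# such as payments or account changes, behind explicit user confirmation.

HIGH_RISK = {"payment", "account_change"}  # assumed example categories

def decide(action: str, ambiguous: bool, user_confirms) -> str:
    """Return the agent's next step for a proposed action.

    user_confirms: callback that asks the user and returns True/False.
    """
    if ambiguous:
        # Never resolve ambiguity by guessing: surface the question.
        return "ask_user_for_clarification"
    if action in HIGH_RISK:
        # "Absolute confirmation rights" for high-risk situations.
        return "execute" if user_confirms(action) else "abort"
    # Routine, unambiguous steps proceed without micromanagement.
    return "execute"

# An ambiguous step pauses; a confirmed payment proceeds; a declined one aborts.
print(decide("add_to_cart", ambiguous=True, user_confirms=lambda a: True))
print(decide("payment", ambiguous=False, user_confirms=lambda a: True))
print(decide("payment", ambiguous=False, user_confirms=lambda a: False))
```

The asymmetry mirrors the finding that transparency needs vary by scenario: routine steps run unattended, while ambiguity and high-stakes actions return control to the user.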
IT House reference links:
- Apple official website: “Mapping the Design Space of User Experience for Computer Use Agents”
- arXiv: “Mapping the Design Space of User Experience for Computer Use Agents”