Rivals Come to Help: Over 30 OpenAI and Google Employees Support Anthropic's Lawsuit Against US Government
TechCrunch reported on March 10 (Beijing time), citing Wired, that more than 30 employees of OpenAI and Google filed a friend-of-the-court brief on Monday supporting Anthropic in its legal battle against the U.S. government. Signatories include Jeff Dean, Chief Scientist at Google's DeepMind.
Just hours earlier, Anthropic had sued the U.S. Department of Defense and other federal agencies, challenging their decision to classify the company as a "supply chain risk." The designation, which took effect after negotiations with the Pentagon broke down, restricts Anthropic's ability to work with military contractors. Anthropic is seeking a temporary restraining order so it can continue working with military partners while the lawsuit proceeds; the brief from the OpenAI and Google employees supports that motion.
"If this action (the supply chain risk designation) is allowed to stand, it will undoubtedly harm the United States' industrial and scientific competitiveness in AI and other fields, as it attempts to punish one of the country's leading AI companies," the brief states.
Signatories of the brief include DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak. A friend-of-the-court (amicus curiae) brief is a legal filing submitted by parties who are not directly involved in a case but have relevant expertise. The brief notes that the employees signed in their personal capacity and do not represent their companies' views.
The brief argues that the Pentagon's decision to blacklist Anthropic "creates unpredictability in their industry, undermining U.S. innovation and competitiveness," and "stifles professional discussion of the benefits and risks of cutting-edge AI systems." It adds that if the Pentagon no longer wished to be bound by its contractual terms, it could simply have terminated its contracts with Anthropic.
Furthermore, the brief emphasizes that Anthropic’s claimed “red lines” are reasonable concerns, including that AI should not be used for large-scale domestic surveillance or to develop autonomous lethal weapons, and that sufficient safety measures are necessary. “In the absence of public legal standards, restrictions imposed by AI developers through contracts and technical means are vital safeguards against catastrophic misuse of these systems,” the brief concludes.
As of press time, OpenAI and Google have not commented on this matter. (Author: Xiao Yu)