Constitution for AI: How Anthropic Sets a New Standard for Safety
Anthropic recently introduced a significantly updated version of its Claude Constitution and made the document publicly available under Creative Commons CC0 1.0, the most permissive license available. This means researchers and companies can now freely use, modify, and distribute the document without restriction. According to PANews, the Constitution serves as a guiding standard for training models, used to generate synthetic data and to evaluate response quality.
From Principles to Practice: The Evolution of the Claude Constitution
The most important change in the updated version is the shift from a simple list of rules to a deep explanation of their reasons and justifications. This approach allows models not just to mechanically follow principles but also to better understand their meaning. It significantly improves the system’s ability to generalize acquired knowledge to new, unseen situations.
The document sets clear priorities: broad safety, deep ethics, strict adherence to guidelines, and genuine user assistance. It also defines "impermeable boundaries" — categories where the model deliberately refuses to assist, such as the development of biological weapons, the synthesis of dangerous substances, and other critical risk scenarios.
How the Constitution Shapes Model Behavior
The structure of the document goes far beyond a typical list of prohibited actions. It includes sections on cultivating virtues, protecting users' psychological safety, and developing self-awareness within the model. Each element aims for Claude not merely to execute commands but to demonstrate responsible behavior when faced with complex moral questions.
An important aspect is the emphasis on transparency and continuous iteration. Anthropic does not see the Constitution as a static document but as a living, evolving tool. The company seeks feedback from the community and scientists, constantly improving standards.
Open License as a Catalyst for Change in AI Safety
The decision to release the document under CC0 carries both symbolic and practical significance. It signals Anthropic's confidence in its approach and its willingness to share that approach with the broader scientific community. Other companies and developers can now adapt the Constitution for their own systems, fostering an ecosystem of safer, value-aligned AI models.
Such openness also supports the industry's commitments to transparency in artificial intelligence. Rather than hiding its methods, Anthropic actively demonstrates how it defines and implements the ethical principles of the Constitution. This could set a benchmark for an industry in which discussions of safety and ethics often remain private company matters.