Quantum Computing Is “Re-evaluating” AI Security and Governance
Article by: Zhang Feng
Today, artificial intelligence is being woven into social production and daily life with unprecedented depth, and its security and governance framework forms a cornerstone of the digital age. Yet a computing-power revolution rooted in physical principles, quantum computing, is quietly drawing near, and its potentially disruptive force is placing existing security perimeters and governance frameworks under severe scrutiny. Will quantum computing overturn today’s AI security and governance systems? This is not only a technical question but a global challenge concerning the future order of digital society. When a leap in computing power meets a lag in rules, how should we prepare for “Q-Day”?
The security of today’s AI systems, from model transmission and data storage to identity authentication, relies heavily on asymmetric cryptographic algorithms such as RSA and ECC (elliptic curve cryptography). These algorithms are secure because the underlying mathematical problems, such as integer factorization and discrete logarithms, are computationally intractable for classical computers within any acceptable time.
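To make the dependence concrete, here is a deliberately tiny RSA sketch (illustrative only; real RSA uses 2048-bit-plus moduli and padding). The moment an attacker can factor the public modulus, the private key falls out immediately:

```python
# Toy RSA with a tiny modulus, showing that RSA's security is exactly
# the hardness of factoring n. Illustrative only, not real cryptography.
from math import gcd

p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient, kept secret
e = 17                         # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent

msg = 42
cipher = pow(msg, e, n)        # encrypt: c = m^e mod n

def factor(n):
    """Trial division; infeasible for real moduli, trivial here."""
    f = 2
    while n % f:
        f += 1
    return f, n // f

# An attacker who factors n recovers the private key directly:
p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d_recovered, n) == msg   # plaintext recovered
```

Every AI pipeline that wraps data or model traffic in RSA/ECC inherits exactly this dependency on factoring (or discrete logs) staying hard.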
Quantum computing, however, brings a fundamental paradigm shift. Quantum algorithms, with Shor’s algorithm as the archetype, can in theory reduce the time required to solve these problems from exponential to polynomial. Recent literature reviews note that newer quantum algorithms, such as Regev’s algorithm and its extensions, continue to improve the efficiency of attacks on asymmetric cryptography. This means that once a sufficiently large general-purpose quantum computer (typically meaning one with millions of stable qubits) comes into being, the “locks” currently protecting internet communications, digital signatures, and encrypted data could be opened almost instantly.
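The heart of Shor’s algorithm is a reduction from factoring to order finding. The sketch below runs that reduction classically on a trivially small number; the brute-force `order` loop is the exponential step that a quantum computer replaces with a polynomial-time circuit, which is the entire threat to RSA:

```python
# Classical sketch of the order-finding reduction inside Shor's
# algorithm. The brute-force order() loop is exponential in the bit
# length of n; a quantum computer does this step in polynomial time.
from math import gcd
import random

def order(a, n):
    """Smallest r > 0 with a^r == 1 (mod n), by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    while True:
        a = random.randrange(2, n)
        g = gcd(a, n)
        if g > 1:                # lucky guess already shares a factor
            return g
        r = order(a, n)          # the quantum step in the real algorithm
        if r % 2:
            continue             # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:           # a^(r/2) == -1 (mod n): retry
            continue
        return gcd(y - 1, n)     # nontrivial factor of n

n = 15   # trivially small; real RSA moduli are far beyond classical reach
f = shor_factor(n)
assert f in (3, 5) and n % f == 0
```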
Nor is this threat far off. Research from the Zhiyuan community warns that it is already in progress: attackers can intercept and store encrypted communications today (including AI training data, model parameters, and more), then simply wait for quantum computers to mature before decrypting them. This “harvest now, decrypt later” strategy exposes all high-value information requiring long-term confidentiality, including state secrets, commercial patents, and personal data, to future risk. The threat quantum computing poses to asymmetric encryption is therefore fundamental and systemic, striking at the security foundation of today’s AI and, indeed, of the entire digital world.
The development of AI depends on feeding massive amounts of data and training complex models; this process is itself full of privacy and security challenges. The introduction of quantum computing makes these challenges even sharper and more complex.
First, long-term confidentiality across the data lifecycle fails. As noted above, AI training datasets that are currently encrypted in cloud storage or in transit may be fully exposed by future quantum decryption. A white paper on global post-quantum migration strategy from Xi’an Jiaotong-Liverpool University states plainly that adversaries worldwide are already organizing this “data harvesting” strategy, patiently waiting for “Q-Day” (the day quantum computers become practical) to arrive. This is a source-level threat to AI models trained on sensitive data such as medical records, financial information, and biometric features.
Second, privacy-preserving computation techniques such as federated learning face new tests. Federated learning protects raw data by training models locally and exchanging only model parameter updates. But those exchanged gradients and parameter updates are themselves protected only by encryption in transit. If the underlying encryption is broken by quantum computing, attackers can invert the updates to infer characteristics of each participant’s original data, rendering the privacy mechanism effectively meaningless.
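A minimal federated-averaging sketch (NumPy, toy least-squares clients) makes the exposure obvious: only the `updates` list ever crosses the network, so the transport encryption around it is the single layer standing between an eavesdropper and gradient-inversion attacks:

```python
# Minimal FedAvg sketch: raw data stays local, but the parameter
# updates that leave each client are protected only by transport
# encryption. All names and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
global_w = np.zeros(dim)

def local_update(w, X, y, lr=0.1):
    """One gradient step of least squares on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Three clients with private datasets that never leave the device.
clients = [(rng.normal(size=(8, dim)), rng.normal(size=8)) for _ in range(3)]

for _ in range(5):  # communication rounds
    # Only these update vectors are transmitted to the server.
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)   # server-side aggregation

print(global_w.shape)   # the shared model after 5 rounds
```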
Finally, model theft becomes easier and intellectual property harder to protect. Trained, mature AI models are a company’s core assets, and today model weights and architectures are typically protected by encryption during distribution and deployment. Quantum computing may render these protections ineffective, allowing models to be copied, reverse-engineered, or tampered with at will, leading to serious intellectual property infringement and security vulnerabilities. In its “Blue Book on AI Governance,” the China Academy of Information and Communications Technology emphasizes that AI governance must address risks such as technology abuse and data security; quantum computing undoubtedly amplifies the destructive potential of these risks.
The combination of quantum computing and AI—quantum machine learning (QML)—signals a new round of performance breakthroughs. But at the same time, it brings unprecedented new safety and ethics issues that challenge existing review frameworks.
On the security front, QML may give rise to more powerful attack tools. Quantum algorithms could, for example, greatly accelerate the generation of adversarial samples, producing attacks that are stealthier and more destructive and causing today’s classically grounded AI defenses (such as adversarial training and anomaly detection) to become obsolete quickly. Some analysts call “quantum + AI” the next decisive battlefield in cybersecurity and argue that regulatory frameworks must be improved proactively.
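For readers unfamiliar with adversarial samples, here is the classical baseline being discussed: the fast gradient sign method (FGSM) against a toy logistic-regression model. The quantum speedup is the article’s hypothesis; this sketch only shows what is being accelerated:

```python
# FGSM against a toy logistic-regression classifier: a tiny, signed
# perturbation of the input sharply reduces the model's confidence.
# Weights and inputs are made up for illustration.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # toy model weights
b = 0.1

def predict(x):
    """P(class = 1) under the logistic model."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.5, 1.0])
y = 1.0                           # true label

# Gradient of cross-entropy loss w.r.t. the *input* is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

eps = 0.5
x_adv = x + eps * np.sign(grad_x)  # FGSM: step along the gradient's sign

print(predict(x), predict(x_adv))  # confidence drops after the attack
```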
On the ethical front, QML’s “black box” may be even more opaque than that of classical AI. Because its decision-making rests on quantum superposition and entanglement, it may prove harder still to explain, audit, and hold accountable. The ethical disputes and risks raised by QML, including algorithmic fairness, the assignment of responsibility, and technical controllability, have already been widely discussed. How can existing AI ethical principles such as transparency, fairness, and accountability be enforced at the quantum scale? How should a regulator review a decision model built on quantum circuits that may exist in a superposition of states? These are hard problems that existing ethical review frameworks are not yet equipped to handle. Governance models must shift from mere technical compliance toward a deeper understanding of quantum properties themselves and their societal impact.
Current AI and data governance regulations, typified by the European Union’s General Data Protection Regulation (GDPR), rest on core principles such as “privacy by design and by default,” “data minimization,” “storage limitation,” and “integrity and confidentiality.” These principles still offer conceptual guidance, but at the level of concrete technical implementation and compliance requirements they face “compliance gaps” opened up by quantum computing.
GDPR requires data controllers to take appropriate technical and organizational measures to ensure data security. But under a quantum threat, what counts as “appropriate” encryption? Continuing to use algorithms proven quantum-unsafe may well be judged, in hindsight, a failure to fulfill security obligations. And when a quantum-powered attack can complete instantly and leave no trace, how can the regulation’s 72-hour breach-notification deadline be meaningfully enforced?
Lawmakers worldwide have recognized the need for change. The “2025 Global Artificial Intelligence Governance Report” shows countries accelerating specialized AI governance legislation and establishing high-level coordinating bodies. China’s “Report on the Development of Digital China (2024)” stresses the need to “accelerate the improvement of data foundational institutional arrangements” and to keep advancing the “AI+” initiative. Governance systems, in other words, are actively adjusting. Yet regulation aimed specifically at the intersection of quantum computing and AI remains largely a blank. Existing rules say nothing about post-quantum cryptography migration timelines, QML model audit standards, or data security classification in the quantum era, and so cannot effectively address the security changes ahead.
The most direct technical answer to quantum threats is post-quantum cryptography (PQC): cryptographic algorithms designed to resist attack by quantum computers. PQC is not based on quantum principles; it rests on new mathematical problems believed to remain hard even for quantum computers, such as lattice problems, code-based problems, and multivariate problems.
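To give a feel for the lattice family, here is a toy single-bit encryption in the spirit of Regev’s learning-with-errors (LWE) scheme, the ancestor of standardized PQC such as ML-KEM. The parameters are far too small to be secure; the point is that the hardness assumption is a noisy linear-algebra problem, not factoring or discrete logs:

```python
# Toy LWE bit encryption (Regev-style). Security parameters here are
# deliberately tiny and insecure; this only illustrates the structure.
import numpy as np

rng = np.random.default_rng(1)
q, n, m = 257, 8, 32             # modulus, secret dimension, samples

s = rng.integers(0, q, n)                  # secret key
A = rng.integers(0, q, (m, n))             # public random matrix
e = rng.integers(-2, 3, m)                 # small noise
b = (A @ s + e) % q                        # public key: noisy inner products

def encrypt(bit):
    r = rng.integers(0, 2, m)              # random subset of the samples
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q       # encode the bit near 0 or q/2
    return u, v

def decrypt(u, v):
    d = (v - u @ s) % q                    # = bit*(q//2) + small noise
    return int(min(d, q - d) > q // 4)     # decode to the nearer of {0, q/2}

for bit in (0, 1):
    u, v = encrypt(bit)
    assert decrypt(u, v) == bit
```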
PQC has broad and urgent applications in AI systems, and can protect every step of the AI workflow: encrypting training data and model files with PQC algorithms; verifying the integrity and provenance of models with PQC digital signatures; and establishing PQC-secured communication channels between distributed AI computing nodes. As Fortinet points out, PQC is not a distant concept but a practical solution urgently needed to protect digital systems from quantum threats.
However, comprehensive implementation of PQC faces significant challenges:
Performance and compatibility challenges: many PQC algorithms have key sizes, signature lengths, or computational overheads far greater than today’s algorithms. Integrating them into AI training and inference pipelines that are sensitive to latency and compute may create performance bottlenecks, and all related hardware, software, and protocol stacks must be upgraded for compatibility.
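The size gap is concrete. The figures below follow the NIST standards FIPS 203 (ML-KEM) and FIPS 204 (ML-DSA) alongside conventional sizes for the classical schemes; the pairing of schemes is my own illustrative comparison:

```python
# Bytes on the wire: classical primitives vs. NIST-standardized PQC.
# PQC sizes per FIPS 203 (ML-KEM-768) and FIPS 204 (ML-DSA-65).
sizes = {
    # scheme:                 (public key, ciphertext-or-signature)
    "X25519 key exchange":    (32,   32),
    "ML-KEM-768 (PQC KEM)":   (1184, 1088),
    "Ed25519 signature":      (32,   64),
    "ML-DSA-65 (PQC sig)":    (1952, 3309),
}

for name, (pk, out) in sizes.items():
    print(f"{name:24} pk={pk:5d} B  output={out:5d} B")
```

A PQC signature is roughly fifty times the size of an Ed25519 one, which matters when every model artifact, update, and node-to-node message carries a signature.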
Complexity of standards and migration: although bodies such as the U.S. NIST are advancing PQC standardization, finalizing standards and achieving global alignment will still take time. The “Frontier Dynamics in Commercial Cryptography” published by Beijing’s MIIT security administration notes that industry players are actively open-sourcing implementations of NIST candidate algorithms to help industries respond to the threat. The migration itself is a vast, complex systems-engineering effort, spanning risk assessment, algorithm selection, hybrid deployment, testing, and wholesale replacement, and it is especially challenging for structurally complex AI ecosystems.
New security risks: PQC algorithms are themselves a relatively young research area, and their long-term security has not endured the decades of real-world cryptanalysis that RSA has. Rushing to deploy PQC with unknown vulnerabilities into AI systems is itself a risk.
The disruptive impact of quantum computing on today’s AI security and governance systems is real and imminent. It does not completely overturn the current systems; rather, by dismantling their cryptographic foundations, amplifying data risks, complicating ethical issues, and highlighting the lag in regulations, it forces the entire system to undergo a profound, forward-looking upgrade.
In the face of this transformation, waiting passively for “Q-Day” is dangerous. We recommend the following actionable steps:
Launch quantum security risk assessments and create inventories: immediately conduct quantum threat assessments of core AI assets (especially models and long-term sensitive data), identify the most vulnerable components, and build a prioritized migration checklist.
Formulate and implement PQC migration roadmaps: track the progress of standardization bodies such as NIST and begin planning PQC integration into AI system development and operations now. Prioritize “cryptographic agility” in new and critical systems so that algorithms can be swapped out seamlessly later, and consider a transitional “classical + PQC” hybrid encryption mode layered over what is currently deployed.
Promote adaptive updates to governance frameworks: Industry organizations, standardization bodies, and regulators should collaborate to study and incorporate quantum-resistance requirements into AI security standards, data protection regulations, and product certification systems. Establish a research framework and guidelines in advance for the ethical review of QML.
Strengthen cross-disciplinary talent development and research: Cultivate interdisciplinary talent who understands both AI and quantum computing and cryptography. Encourage the inclusion of quantum threat models in AI security research, and fund the development of anti-quantum AI security technologies.
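The “cryptographic agility” recommended above can be sketched as a registry pattern: application code depends on a named-algorithm wrapper, so swapping in a PQC backend later is a configuration change rather than a rewrite. The class and registry names below are hypothetical, not a real library API:

```python
# Hypothetical crypto-agility sketch: route all hashing through a
# named-algorithm registry so a PQC backend can be plugged in later
# without touching application code. Names here are illustrative.
import hashlib
from typing import Callable, Dict

HASHES: Dict[str, Callable[[bytes], bytes]] = {
    "sha256":   lambda d: hashlib.sha256(d).digest(),
    "sha3-256": lambda d: hashlib.sha3_256(d).digest(),
    # "ml-dsa-65": ...   # a PQC signature backend could register here
}

class AgileHasher:
    """Application code depends on this wrapper, not on one algorithm."""
    def __init__(self, algorithm: str = "sha256"):
        self.algorithm = algorithm

    def digest(self, data: bytes) -> bytes:
        return HASHES[self.algorithm](data)

model_bytes = b"...serialized model weights..."
h  = AgileHasher("sha256").digest(model_bytes)
# Migrating is a config change, not a code change:
h2 = AgileHasher("sha3-256").digest(model_bytes)
assert h != h2 and len(h) == len(h2) == 32
```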
The challenges quantum computing brings are immense, but they are also an opportunity to re-examine and reinforce the foundations of the digital world. Through proactive planning, collaborative innovation, and agile governance, we can build a more resilient AI future, one that embraces the performance gains of quantum computing while withstanding its security risks.