If we ever want AI to run on-chain without setting the network on fire, we need to stop trying to prove things just because we can.
@inference_labs is taking a "less is more" route by focusing on anomaly detection and safety boundaries.
To me, this is the only way zkML scales for real finance or healthcare apps.
You don't need a proof for the whole black box; you just need to know the box didn't violate its safety policy.
Are we over-engineering ZK proofs for AI right now?
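To make the "prove the policy, not the black box" idea concrete, here is a toy sketch (plain Python, not real zkML; the trading-limit policy and all names are hypothetical): the model itself stays unproven, and only a tiny safety predicate over its outputs would ever need a ZK circuit.

```python
# Toy illustration (not actual zkML): rather than proving every step of a
# model's inference, a verifier only checks a small safety predicate over
# the model's outputs. The policy here (a position limit for a trading
# agent) is a made-up example.

def violates_policy(action: dict, max_notional: float = 1_000.0) -> bool:
    """Safety boundary: flag any order whose size exceeds the limit."""
    return abs(action["notional"]) > max_notional

def safety_check(actions: list[dict]) -> list[bool]:
    # The whole black box (the model) stays unproven; only this tiny
    # predicate would need to be put in a circuit, keeping proving cheap.
    return [violates_policy(a) for a in actions]

# Example: two in-bounds orders and one anomaly.
flags = safety_check([
    {"notional": 250.0},
    {"notional": 980.0},
    {"notional": 5_000.0},
])
print(flags)  # [False, False, True]
```

The proving cost then scales with the size of the policy, not the size of the model, which is the "less is more" trade-off the post is pointing at.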