Announcing DeepProve: zkML to Keep AI in Check

March 12, 2025

The future of AI is ZK. The future of humanity is Lagrange.

Lagrange is announcing the launch of DeepProve: a groundbreaking zkML library that makes verifiable AI inference faster and more scalable than ever before, up to 158x faster than the leading zkML library to date.

Superintelligence is inevitable. Today, AI algorithms operate in a black box—all we know is that AI models solve for objectives by minimizing error, which can result in unintended outcomes. As we become more dependent on AI, we must ensure that we are able to verify how AI models arrive at their conclusions in order to a) prevent harmful outcomes and b) ensure that the actions of artificial superintelligence align with our best interests.

With DeepProve, we can now prove that AI inferences are correct, verifying that AI models are producing results according to their expected functions (and in the best interests of humanity).

Superintelligence is Inevitable

AI is accelerating, and it will not stop.

Chatbots are getting eerily human-like. Deepfake videos are becoming increasingly difficult to spot. Key decisions in important fields — from healthcare to law enforcement, from driving to mission-critical military assignments — are being made based on the outputs of AI models. Yet, the models themselves are black boxes to the people who use them—including their own creators. Day by day, we are becoming more dependent on technology we cannot understand.

Leading AI scientists are certain that we are advancing towards the creation of artificial superintelligence (ASI), a form of AI that surpasses human intelligence across the board. We already know that AI can hide its true intentions — as demonstrated when an AI developed distinct plans for passing safety tests vs. how it performed in deployment, showing how advanced systems can use deception to accomplish objectives. A future where AIs deliberately mislead us in pursuit of their own self-preservation or control is not science fiction — it is at our doorstep.

The ways in which an AI system could become a threat to humanity are increasingly obvious.

AI governance — or the capacity to retain control over these budding intelligent entities — is woefully inadequate. This is because it has been impossible to discern how an AI model arrives at a conclusion; its outputs can be seen, but the logic by which those outputs were reached cannot. In other words, we can see what an AI tells us, but we cannot see why it tells us that. We're left to wonder: is this AI serving its own goals, or mine?

Today’s systems force us to trust AI outputs without knowing the logic behind them.
You cannot trust what you cannot verify. 

As humanity barrels towards the emergence of superintelligence, how will we ensure that it acts to our benefit? How will we ensure that it doesn't wipe us all out?

The short answer is verifiability. AI development needs to be open, decentralized, and verifiable today if we want to maintain control over it tomorrow. We need to make sure that human interests are protected from day one by being able to guarantee that AI is operating as we expect it to. We need to replace blind trust with objective guarantees that the technology that is dominating more and more of our lives will continue to serve our best interests into the future.

Lagrange is taking a huge step towards ensuring this future for humanity. Lagrange's zkML library, DeepProve, is a new tool for verifiable AI inference. This is a watershed moment for AI, for ZK, for crypto, and for humanity.

What is zkML?

In simple terms, zkML is a combination of zero-knowledge proofs (ZKPs) and machine learning (ML) that allows computations on ML models to be verified. Lagrange's DeepProve provides developers with the tools to implement this in the fastest, most efficient, and most scalable manner possible.

Effectively, zkML allows anyone to take an ML model and prove that:

  1. It is the right ML model
  2. It arrives at the right results
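The two guarantees above can be illustrated with a toy commit-and-check flow. This is a sketch only: it uses a plain hash in place of a cryptographic commitment and re-executes the model in place of a real SNARK verifier, and all function names here are hypothetical, not DeepProve's API.

```python
import hashlib
import json

def commit(weights):
    # Commitment to the model: a hash of its weights. Real zkML uses a
    # cryptographic commitment checked inside the proof system.
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()

def infer(weights, x):
    # Toy "model": a single linear unit, y = w*x + b.
    return weights["w"] * x + weights["b"]

def prove(weights, x):
    # A real zkML prover emits a succinct zero-knowledge proof; this toy
    # "proof" merely binds model commitment, input, and output together.
    return {"model": commit(weights), "input": x, "output": infer(weights, x)}

def verify(proof, expected_commitment, weights):
    # Guarantee 1: it is the right model (commitment matches).
    right_model = proof["model"] == expected_commitment
    # Guarantee 2: it arrives at the right result. A real SNARK verifier
    # checks this succinctly, without the weights and without re-running
    # the model; this toy re-executes purely for illustration.
    right_result = infer(weights, proof["input"]) == proof["output"]
    return right_model and right_result

weights = {"w": 2.0, "b": 1.0}
c = commit(weights)
p = prove(weights, 3.0)
print(verify(p, c, weights))  # True: right model, right result
```

A tampered proof (a swapped output, or a commitment from a different model) fails verification, which is exactly the property that removes blind trust in the black box.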

With zkML, we can prove AI model inferences cryptographically, removing trust assumptions: each decision is verifiably produced by the correct model, aligned with human intent. We no longer have to rely on blind trust in AI black boxes.

ZK is an essential tool in keeping AI under control. DeepProve makes it easy to prove AI model inferences with ZK. 

This is the only way that we, humanity, can ensure that superintelligence develops in a way that serves — rather than endangers — our livelihood.

DeepProve: Lagrange’s zkML—Up to 158x Faster Than the Competition

DeepProve is a zkML library that generates proofs of inference for Multilayer Perceptrons (MLPs) and popular convolutional neural networks (CNNs).

The developer workflow will look as follows:

  1. Model Training: The developer trains a neural network and exports it as an ONNX file.
  2. Preprocessing: The DeepProve executable parses the ONNX file, generates circuits, and prepares prover/verifier keys for the SNARK proof system.
  3. Proving: A prover runs the SNARK prover to compute inferences and generate proofs for given inputs.
  4. Verification: Any verifier can validate these proofs to confirm the correctness of outputs.
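The pipeline above can be sketched end to end with stand-in cryptography. This is a minimal mock, not DeepProve's actual interface: plain hashes stand in for circuits and SNARK keys, a Python function stands in for the trained ONNX model, and every name here is hypothetical.

```python
import hashlib

def preprocess(onnx_bytes):
    # Step 2: parse the model and set up prover/verifier keys.
    # Real DeepProve builds SNARK circuits from the ONNX graph; here
    # both "keys" are just tags derived from the model bytes.
    h = hashlib.sha256(onnx_bytes).hexdigest()
    return "pk:" + h, "vk:" + h

def prove(prover_key, run_model, x):
    # Step 3: compute the inference and emit a "proof" for the output.
    y = run_model(x)
    tag = hashlib.sha256(f"{prover_key[3:]}|{x}|{y}".encode()).hexdigest()
    return {"input": x, "output": y, "tag": tag}

def verify(verifier_key, proof, run_model):
    # Step 4: any holder of the verifier key can check the proof.
    # A real SNARK verifier is succinct and never re-runs the model;
    # this toy recomputes the inference purely for illustration.
    y = run_model(proof["input"])
    tag = hashlib.sha256(f"{verifier_key[3:]}|{proof['input']}|{y}".encode()).hexdigest()
    return y == proof["output"] and tag == proof["tag"]

# Step 1 stand-in: a "trained model" (doubling function) and its
# serialized bytes, in place of a real network exported to ONNX.
model_bytes = b"onnx-model-bytes"
double = lambda x: 2 * x

pk, vk = preprocess(model_bytes)
proof = prove(pk, double, 21)
print(verify(vk, proof, double))  # True
```

Note the division of roles the workflow implies: only the prover needs the prover key and the model, while verification requires just the verifier key and the proof, which is what lets anyone confirm an inference without trusting the party who ran it.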

DeepProve is currently 54x to 158x faster at generating proofs than EZKL, the leading zkML library to date. As the size and complexity of the ML model increases, so too does DeepProve's performance advantage over EZKL. When compared to EZKL running on GPUs, our proof verification times are up to 671x faster for MLPs and 521x faster for CNNs, bringing verification down to just half a second and making it practical for real-world applications.

As we scale to models with millions of parameters, DeepProve will become even faster. Further optimizations—including better parallelization, distributed proving, advanced commitment schemes, GPU/ASIC enhancements, and reduced GKR overhead—will push scalability even further.

DeepProve is up to 158x faster at generating proofs and up to 671x faster at verifying proofs than EZKL.

Where DeepProve’s Impact Starts

Lagrange is working with a variety of leading projects in AI and Web3 to accelerate the adoption of this paradigm shift in the evolution of AI — verifiable AI. zkML will be useful for many crypto use cases, including AI-generated trait evolution for collectibles, provenance verification for NFTs, and verifiable AI model integration for smart contracts, to name a few.

Provable AI computations will pioneer innovation in other industries as well: zkML could be used for generating private AI inferences, secure collaborative model training, and verifiable agent actions — all of which are vital to powering broader everyday AI use cases with the verifiability needed to ensure AI alignment.

Follow Lagrange on X to stay updated on new developments.