Making Hard Math Provable: Introducing Efficient Proofs for Non-Polynomial Functions
February 23, 2026

Zero-knowledge proofs have become one of the most powerful tools in cryptography. They let you prove a computation was done correctly without revealing private inputs. That is why ZK is used for privacy, compliance, and scaling systems where trust cannot be assumed. Yet there is still a practical obstacle that ZK technology faces: many real computations rely on mathematical functions that are surprisingly expensive to prove.
Our newest research paper, “Efficiently Provable Approximations for Non-Polynomial Functions” (Sridhar, Srinivasan, Papadopoulos, Papamanthou), tackles this obstacle directly. The research introduces a general framework that makes a large class of “hard” functions both accurate and efficient to prove inside zk-SNARKs, with strong empirical results in a production proof stack (Noir and Barretenberg).
What are “Non-Polynomial Functions” and Why Are They Difficult to Prove?
Many systems depend on functions like:
- trigonometric functions (sin, cos, tan) used in navigation, physics, and coordinate transforms
- exponentials and logs used in finance and probability
- activation functions like sigmoid and GeLU used in machine learning
- non-elementary functions like erf used in statistics and scientific computing
These are called non-polynomial functions because you cannot write them as a simple polynomial like aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₀. zk-SNARKs are extremely good at proving polynomial-style arithmetic; however, they are much less efficient when asked to prove the “real number math” that looks natural in ordinary software. Directly proving floating-point operations inside a SNARK is expensive and can require hundreds to thousands of constraints for basic operations. This is because most proof systems operate over a finite field (think “a special kind of integer math with wraparound”), whereas real-world software uses floating-point numbers (what your computer uses to compute decimals).
In practice, many systems represent real numbers as fixed-point numbers using quantization (for example, storing 3.1415 as the integer 31415 with an implied decimal position). This is fast and deterministic, but it means you now need to approximate non-polynomial functions efficiently and accurately in this fixed-point world.
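To ground the fixed-point idea, here is a minimal sketch in plain Python (the 16-bit scale and the helper names are illustrative assumptions, not values or code from the paper). The key detail is the rescale after every multiply, which is where rounding error enters:

```python
SCALE_BITS = 16               # assumed precision: 16 fractional bits
SCALE = 1 << SCALE_BITS

def to_fixed(x: float) -> int:
    """Quantize a real number to a scaled integer."""
    return round(x * SCALE)

def to_float(q: int) -> float:
    """Recover an approximate real value."""
    return q / SCALE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values; the raw product carries
    2*SCALE_BITS fractional bits, so we rescale (and lose low-order
    bits to rounding)."""
    return (a * b) >> SCALE_BITS

pi_q = to_fixed(3.1415)       # stored as the integer 205881
two_q = to_fixed(2.0)
result = to_float(fixed_mul(pi_q, two_q))   # ≈ 6.283
```

Addition of two fixed-point values at the same scale is just integer addition with no rescale, which is one intuition for why sums are gentler on accuracy than multiplication chains.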
Today’s ZK Systems (and How We’re Improving Them)
There are three common approaches used in ZK systems today:
- Lookup tables: Precompute a big table of outputs and prove you looked up the right entry. This can be accurate, but tables get huge for large input ranges. For 32-bit domains, tables can become too large to store or prove efficiently.
- Piecewise approximations: Break the input into many segments and approximate each segment with a line. This can approximate almost anything, but higher accuracy means many segments and higher proving cost.
- Taylor series and polynomials: Approximate the function with a polynomial expansion. This can be efficient near the approximation point, but high accuracy often requires higher-degree polynomials, which increases multiplications and worsens error behavior in quantized arithmetic.
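To make the piecewise-linear tradeoff concrete, here is a toy sketch in plain Python (not circuit code; the segment count is an illustrative choice). With 16 segments over a quarter period of sine, the worst-case error lands around 10⁻³; shrinking it further requires proportionally more segments, and in a circuit each segment adds selection and range-check constraints:

```python
import math

# Piecewise-linear approximation of sin on [0, pi/2].
SEGMENTS = 16
H = (math.pi / 2) / SEGMENTS
XS = [i * H for i in range(SEGMENTS + 1)]          # breakpoints
YS = [math.sin(x) for x in XS]                     # precomputed values

def pw_sin(x: float) -> float:
    """Linear interpolation between the two nearest breakpoints."""
    i = min(int(x / H), SEGMENTS - 1)
    t = (x - XS[i]) / H
    return YS[i] + t * (YS[i + 1] - YS[i])

# Worst-case error over a dense sample of the domain.
err = max(abs(pw_sin(k / 1000 * math.pi / 2) - math.sin(k / 1000 * math.pi / 2))
          for k in range(1000))
```

Because linear-interpolation error shrinks quadratically with segment width, each extra bit of accuracy roughly demands 1.4× more segments, which is the cost curve the bullet above describes.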
Here is the key point: in fixed-point arithmetic, repeated multiplications tend to amplify rounding error, so accuracy loss is often tied to how much multiplicative “stacking” the circuit does.
Key Breakthrough: Depth Matters
The research introduces a framework designed around a critical practical constraint: multiplicative depth.
- Multiplicative depth is a circuit property describing how many layers of multiplications depend on each other.
- In quantized computation, deeper multiplication chains amplify rounding error.
- Many existing approximation techniques increase depth when you push for higher accuracy.
Our framework’s main contribution is that it can increase accuracy largely by increasing the number of terms in a sum, rather than by increasing multiplication depth. This matters because errors do not compound through addition the way they do through repeated multiplication, so you get high accuracy without the same penalty.
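A toy experiment makes the depth effect concrete (plain Python with hypothetical parameters, not the paper’s construction): computing the same power x¹⁶ of the same quantized input through a depth-15 multiplication chain accumulates more rounding error than a depth-4 squaring tree, because every rescaling multiply rounds:

```python
SCALE_BITS = 8                      # assumed, deliberately coarse precision
SCALE = 1 << SCALE_BITS

def mul(a: int, b: int) -> int:
    """Fixed-point multiply with a truncating rescale (one rounding step)."""
    return (a * b) >> SCALE_BITS

xq = round(1.02 * SCALE)            # quantized input, ~1.02

# Depth-15 chain: x, x^2, x^3, ..., x^16 -- 15 dependent multiplications.
chain = xq
for _ in range(15):
    chain = mul(chain, xq)

# Depth-4 tree: x^2, x^4, x^8, x^16 -- 4 dependent multiplications.
tree = xq
for _ in range(4):
    tree = mul(tree, tree)

# Compare against the exact power of the *quantized* input, so both
# variants start from the same representation error.
true_val = (xq / SCALE) ** 16
err_chain = abs(chain / SCALE - true_val)
err_tree = abs(tree / SCALE - true_val)
```

The shallower structure is measurably more accurate here even though both compute the same function, which is the intuition behind designing approximations whose accuracy grows through extra summation terms instead of extra multiplication layers.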
Two New Techniques: Padé and Gauss–Legendre Quadrature
Technique 1: Padé approximants
A Padé approximant is a rational approximation, meaning it represents a function as: polynomial / polynomial. Our research shows Padé can reach the same target accuracy as Taylor series while using lower-degree polynomials, which usually means fewer depth-expensive multiplications. In experiments, the Padé approach often delivered the best overall performance and accuracy tradeoff.
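The following sketch uses the standard textbook [2/2] Padé approximant of exp (illustrative only, not the paper’s exact construction): it matches the accuracy of a degree-4 Taylor polynomial while its numerator and denominator are only degree 2, so fewer chained multiplications are needed:

```python
import math

def pade_exp(x: float) -> float:
    """[2/2] Pade approximant of exp:
    (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)."""
    num = 1 + x / 2 + x * x / 12
    den = 1 - x / 2 + x * x / 12
    return num / den

def taylor_exp(x: float, degree: int) -> float:
    """Truncated Taylor series of exp around 0."""
    return sum(x ** k / math.factorial(k) for k in range(degree + 1))

x = 0.5
errs = {
    "pade [2/2]":      abs(pade_exp(x) - math.exp(x)),
    "taylor degree 2": abs(taylor_exp(x, 2) - math.exp(x)),
    "taylor degree 4": abs(taylor_exp(x, 4) - math.exp(x)),
}
```

At x = 0.5 the Padé form is even slightly more accurate than the degree-4 Taylor polynomial, and far better than the Taylor polynomial of matching degree 2; in a circuit, the price of the rational form is one division, typically proven as a multiplication plus an inverse check.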
Technique 2: Gauss–Legendre Quadrature
Quadrature is a numerical integration method that approximates an integral as a weighted sum of function values at specific points. The key property of Gauss–Legendre quadrature is that it gives very high accuracy with structured weights and points. The research shows how many non-polynomial functions can be rewritten in a form where proving them reduces to proving a weighted summation plus a set of inverse checks. Critically, increasing accuracy increases the number of summation terms, but does not increase multiplicative depth beyond a small constant (reported ≤ 4). The result is a proof circuit that stays shallow, which helps preserve accuracy even when these functions are used repeatedly (like many activation layers in ML).
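As an illustration of the quadrature idea (this uses NumPy’s Gauss–Legendre nodes and the erf integral as an example; it is not the paper’s circuit), note how raising the node count adds summation terms without deepening the computation:

```python
import math
import numpy as np

def erf_gl(x: float, n_nodes: int) -> float:
    """Approximate erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt
    with an n_nodes-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)  # on [-1, 1]
    t = x / 2 * (nodes + 1)                # map nodes from [-1, 1] to [0, x]
    integral = x / 2 * np.sum(weights * np.exp(-t * t))
    return 2 / math.sqrt(math.pi) * integral

# More nodes -> more terms in the weighted sum -> more accuracy,
# while the structure stays a shallow weighted summation.
err_2 = abs(erf_gl(1.0, 2) - math.erf(1.0))
err_8 = abs(erf_gl(1.0, 8) - math.erf(1.0))
```

Going from 2 to 8 nodes shrinks the error by many orders of magnitude, yet the extra work is purely additional weighted terms in a sum, which matches the depth-stays-constant property described above.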
Experimental Results
We implemented both techniques in Noir with the Barretenberg backend (UltraPLONK), which is production-grade ZK tooling. We evaluated eight functions across four categories (trigonometric, activation, power, non-elementary) and compared them against lookup tables, piecewise-linear approximations, and Taylor series.
The key outcomes were:
- Gauss–Legendre quadrature achieved up to 8 extra bits of accuracy, translating to up to 256× lower absolute error in larger quantization domains.
- Padé often achieved 2–20× better prover performance while remaining comparable in accuracy loss, and generally outperformed baseline numerical approximations.
- Verifier time stayed in the low milliseconds, and proof sizes stayed within a narrow range, because all methods run on the same underlying SNARK backend.
We also validated the approach in two applications where accuracy matters:
- Orbital mechanics (proving correctness of computation related to Kepler’s equation)
- DeFi pricing (verifying the Bancor pricing function involving fractional powers)
In both cases, we achieved significant accuracy gains in large quantization domains, with performance comparable to baselines.
Implications for Cryptography and Verifiable AI
- Broader expressiveness for ZK systems: This work expands the set of computations that zk-SNARKs can prove efficiently, especially when real-number math is involved. Many real applications become much more feasible when non-polynomial functions are no longer a performance cliff.
- Better foundations for verifiable machine learning: Machine learning models are full of non-polynomial operations (sigmoid, GeLU, softmax, tanh, exp, normalization, and more). Efficiently proving these functions with high accuracy is directly relevant to zkML, and this research provides new building blocks for proving ML pipelines without forcing unacceptable accuracy loss.
- A clearer path toward proof-friendly numerical computing: Cryptography is increasingly intersecting with scientific and engineering workloads. Our research’s core perspective, that depth matters and additive accumulation is safer for error, suggests a direction for how future proof-friendly numerics should be constructed.
What This Means for Defense
Many defense and aerospace systems rely on precisely the kinds of functions addressed in this paper:
- Trigonometric functions for navigation and coordinate transforms
- Exponentials and non-linear activations in ML models
- Statistical functions used in detection and filtering
- High-precision numerical workflows where small error compounds over time
Efficient proofs strengthen the underlying cryptographic and numerical foundations required for verifiable AI in mission-critical settings, making “proof of correct computation” more viable for the real math these systems depend on. By introducing Gauss–Legendre quadrature and Padé approximants into zk-SNARK-friendly approximations, the work improves both performance and precision compared to common baselines and demonstrates the result in real application benchmarks.
In order to apply verifiable AI to operational infrastructure, we must be able to prove the math real systems use. Our latest research marks a concrete step in that direction, advancing cryptography for the industries that need it most.


