BDD-Based Formal for Floating Point. Innovation in Verification

A different approach to formally verifying very challenging datapath functions. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. We’re planning to add a wrinkle to our verification exploration this year. Details to follow!

The Innovation

This month’s pick is Polynomial Formal Verification of Floating-Point Adders, presented at the 2023 DATE conference. The authors are from the University of Bremen, Germany.

Datapath element implementations must be proved absolutely correct (remember the infamous Pentium floating point bug), which demands formal proofs. Yet BDD state graphs for floating point elements rapidly explode, while SAT proofs are often bounded and hence not truly complete.

The popular workaround today is equivalence checking against a C/C++ reference model, which works very well but of course depends on a trusted reference. However, some brave souls are still trying to find a path with BDDs. These authors use case splitting to limit state graph explosion, dropping proof complexity from exponential to polynomially bounded. Let’s see what our reviewers think!

Paul’s view

A compact, easy-to-read paper to kick off 2024, and on a classic problem in computer science: managing BDD size explosion in formal verification.

The key contribution of the paper is a new method for “case splitting” in formal verification of floating point adders. Traditionally, case splitting means picking a Boolean variable that causes a BDD to blow up in size, and running two separate formal proofs: one for the “case” where that variable is true and one for the case where that variable is false. If both proofs pass, then the overall proof for the full BDD including that variable must necessarily also pass. Of course, case splitting on n variables means 2^n cases, so if you use it everywhere you just trade one exponential blow-up for another.
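
To make the soundness argument concrete, here is a minimal Python sketch. It is not the authors’ code: the names spec, impl and prove are made up, and the exhaustive prove routine merely stands in for a real BDD engine. The point is only that two restricted sub-proofs, one per value of the split variable, together imply the unrestricted proof.

```python
from itertools import product

WIDTH = 4  # toy operand width; a real proof is symbolic, this one is exhaustive

def spec(a, b):
    # reference model: plain integer addition modulo 2**WIDTH
    return (a + b) % (2 ** WIDTH)

def impl(a, b):
    # "implementation" under test: a ripple-carry adder built from bit operations
    result, carry = 0, 0
    for i in range(WIDTH):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i
        carry = (x & y) | (x & carry) | (y & carry)
    return result

def prove(constraint):
    # stand-in for a formal engine: check every input that satisfies the constraint
    return all(impl(a, b) == spec(a, b)
               for a, b in product(range(2 ** WIDTH), repeat=2)
               if constraint(a, b))

# Case split on one variable: the most significant bit of operand a.
case_msb_0 = prove(lambda a, b: ((a >> (WIDTH - 1)) & 1) == 0)
case_msb_1 = prove(lambda a, b: ((a >> (WIDTH - 1)) & 1) == 1)

# If both restricted proofs pass, the unrestricted proof must pass too,
# because the two cases exhaustively cover the input space.
assert case_msb_0 and case_msb_1
print("toy adder proved correct via case split on the MSB of a")
```

In a BDD setting, each restricted proof corresponds to cofactoring the BDD on the split variable, which is where the size reduction comes from.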

This paper observes that case splitting need not be based only on individual Boolean variables. Any exhaustive sub-division of the problem is valid. For example, prior to normalizing the base-exponent, a case split can be performed on the number of leading zeros in the base – i.e. zero leading zeros, one leading zero, and so on. This particular choice of split, combined with one other cunning split in the alignment shift step, achieves a magical compromise such that the overall proof for a floating point add goes from exponential to polynomial in complexity. A double precision floating point add circuit can now be formally proved correct in 10 seconds. Nice!
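
A hedged toy illustration of the leading-zero split, using a standalone normalizer rather than the paper’s adder and an assumed width of 8 bits: one sub-proof per leading-zero count gives WIDTH + 1 cases, each with a fixed shift amount, instead of one monolithic check over all 2^WIDTH values.

```python
WIDTH = 8  # toy significand width

def leading_zeros(x):
    # number of leading zero bits of x viewed as a WIDTH-bit value
    for i in range(WIDTH):
        if (x >> (WIDTH - 1 - i)) & 1:
            return i
    return WIDTH

def normalize(x):
    # toy normalization shift: move the first set bit up to the MSB position
    return x << leading_zeros(x) if x else 0

# One sub-proof per leading-zero count: WIDTH + 1 cases, each with a fixed
# shift amount, instead of one monolithic check over all 2**WIDTH values.
for k in range(WIDTH + 1):
    for x in (v for v in range(2 ** WIDTH) if leading_zeros(v) == k):
        y = normalize(x)
        # after normalization the MSB is set (unless the value is zero) ...
        assert y == 0 or ((y >> (WIDTH - 1)) & 1) == 1
        # ... and no information is lost: shifting back recovers the input
        assert (y >> k) == x

print("toy normalizer checked with", WIDTH + 1, "leading-zero cases")
```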

Raúl’s view

This short paper introduces a novel approach to managing the size explosion problem in formal verification of floating point adders using BDDs, a classic issue in equivalence checking. Traditionally, this is addressed by case splitting, i.e., dividing the problem based on the values (0, 1) of individual Boolean variables, which also leads to exponential growth in complexity with the number of variables split. Based on observations of where the explosion in size happens when constructing the BDDs, the paper proposes three innovative case splitting methods. They are not based on individual Boolean variables and are specific to floating point adders (of course, they do not reduce general equivalence checking to P).

  1. Alignment Shift Case Splitting: The paper suggests splitting on the shift amount, i.e. the exponent difference, significantly reducing the number of cases needed for verification (a small sketch of this partition follows the list below).
  2. Leading Zero Case Splitting: To address the explosion at the normalization shift, the paper proposes creating cases based on the number of leading zeros in the addition result.
  3. Subnormal Numbers and Rounding: Subnormal numbers are handled by adding a simplification in cases where they can occur; rounding does not trigger an explosion in BDD size.
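
Regarding item 1, the soundness requirement behind any of these splits is that the cases exhaustively cover the input space. Below is a hedged Python sketch, with made-up toy widths, of partitioning exponent pairs by their difference, with one catch-all case once the difference exceeds the significand width plus guard and round bits (where the smaller operand can only contribute a sticky bit). The case count is linear in the significand width rather than exponential in the number of exponent bits.

```python
SIG_WIDTH = 11   # toy significand width (including the hidden bit)
EXP_BITS = 5     # toy exponent width
EXPONENTS = range(2 ** EXP_BITS)

def alignment_case(ea, eb):
    # case label: the signed exponent difference, saturated once the smaller
    # operand would be shifted entirely past the significand plus guard and
    # round bits and can only contribute a sticky bit
    cap = SIG_WIDTH + 2
    return max(-cap, min(cap, ea - eb))

cases = {}
for ea in EXPONENTS:
    for eb in EXPONENTS:
        cases.setdefault(alignment_case(ea, eb), []).append((ea, eb))

# Soundness of the split: no exponent pair is lost ...
assert sum(len(pairs) for pairs in cases.values()) == len(EXPONENTS) ** 2

# ... and the number of cases grows linearly with the significand width,
# not exponentially with the number of exponent bits.
print(len(cases), "alignment cases cover", len(EXPONENTS) ** 2, "exponent pairs")
```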

By strategically choosing these case splits, the overall proof complexity for floating point addition can be reduced from exponential to polynomial. As a result, formal verification of double and quadruple precision floating point add circuits, which times out after two hours with classic symbolic simulation, can now be completed in 10-300 seconds!
