Developing Bug-Free Machine Learning Systems with Formal Mathematics

  • Daniel Selsam,
  • Percy Liang,
  • David L. Dill

ICML 2017

Published by PMLR


Noisy data, non-convex objectives, model misspecification, and numerical instability can all cause undesired behaviors in machine learning systems. As a result, detecting actual implementation errors can be extremely difficult. We demonstrate a methodology in which developers use an interactive proof assistant to both implement their system and to state a formal theorem defining what it means for their system to be correct. The process of proving this theorem interactively in the proof assistant exposes all implementation errors since any error in the program would cause the proof to fail. As a case study, we implement a new system, Certigrad, for optimizing over stochastic computation graphs, and we generate a formal (i.e. machine-checkable) proof that the gradients sampled by the system are unbiased estimates of the true mathematical gradients. We train a variational autoencoder using Certigrad and find the performance comparable to training the same model in TensorFlow.
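Stated in symbols (the notation below is illustrative, not the paper's own), the correctness theorem says that the expectation of a gradient sampled by the system, written here as \(\hat{g}(\theta)\), equals the true gradient of the expected loss \(\ell\) over the stochastic computation graph:

    \mathbb{E}\big[\hat{g}(\theta)\big] \;=\; \nabla_{\theta}\, \mathbb{E}_{x \sim p_{\theta}}\big[\ell(x, \theta)\big]

Any implementation error that biased the sampled gradients would make this equality unprovable, which is how the interactive proof process surfaces bugs.

The methodology itself can be illustrated with a toy Lean sketch (a hypothetical example, not code from Certigrad): the developer implements a function and states a theorem pinning down its intended behavior; a bug in the implementation (say, returning x + x + 1) would leave the theorem unprovable.

    -- Toy Lean 4 sketch of the implement-and-prove methodology (not from the paper).
    -- The theorem specifies what "correct" means; the proof succeeds only if the
    -- implementation actually satisfies it.
    def double (x : Nat) : Nat := x + x

    theorem double_correct (x : Nat) : double x = 2 * x := by
      unfold double
      omega   -- closes the linear-arithmetic goal x + x = 2 * x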
