Using Noise to Infer Aspects of Simplicity Without Learning

Noise in data significantly influences decision-making in the data science process. In particular, it has been shown that noise in the data generation process motivates practitioners to choose simpler hypothesis spaces. However, an open question remains: what degree of model simplification can we expect under different noise levels? In this work, we address this question by investigating the relationship between the amount of noise and model simplicity across hypothesis spaces, focusing on decision trees and linear models. We formally show that noise acts as an implicit regularizer under several noise models. Furthermore, we prove that Rashomon sets (sets of near-optimal models) constructed from noisy data tend to contain simpler models than the corresponding Rashomon sets constructed from noise-free data. Additionally, we show that noise expands the set of “good” features and, consequently, enlarges the set of models that use at least one good feature. Our work offers theoretical guarantees and practical insights for practitioners and policymakers on whether simple-yet-accurate machine learning models are likely to exist, given knowledge of the noise level in the data generation process.
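The Rashomon set mentioned above can be illustrated with a minimal sketch: given a hypothesis space and a loss, it is the set of all models whose loss is within a tolerance ε of the best achievable loss. The toy dataset, the decision-stump hypothesis class, and the choice of ε below are illustrative assumptions, not the paper's actual setup:

```python
# Illustrative sketch of a Rashomon set (assumed toy setup, not the paper's).
# Hypothesis space: decision stumps "predict 1 iff x > theta" on a 1-D dataset.

# Toy dataset: the true label is 1 exactly when x > 0.5.
X = [i / 20 for i in range(20)]
y = [1 if x > 0.5 else 0 for x in X]

def loss(theta):
    """0-1 loss of the stump 'predict 1 iff x > theta' on (X, y)."""
    preds = [1 if x > theta else 0 for x in X]
    return sum(p != t for p, t in zip(preds, y)) / len(y)

# Candidate thresholds on a grid define the (finite) hypothesis space.
thetas = [i / 20 for i in range(21)]
losses = {t: loss(t) for t in thetas}
best = min(losses.values())

# Rashomon set: all stumps whose loss is within epsilon of the optimum.
epsilon = 0.05
rashomon = sorted(t for t, l in losses.items() if l <= best + epsilon)
print(best, rashomon)
```

With noise-free labels the optimal stump (theta = 0.5) attains zero loss, and only thresholds adjacent to it enter the set; flipping some labels (label noise) typically raises the best achievable loss and admits more, often simpler, near-optimal models into the set.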