Speaker
Eduardo Sakabe
(UNICAMP)
Description
We introduce the Algorithmic Inference Benchmark (AIB), a framework designed to distinguish whether learning systems capture the generative mechanisms underlying data or merely fit statistical regularities. AIB constructs synthetic datasets using known rules, allowing independent control over algorithmic and statistical difficulty. By manipulating the generative rule and sampling process, AIB enables controlled experiments that reveal whether models rely on algorithmic abduction or statistical prediction. This conceptual work outlines the design principles of AIB and motivates its use for developing learning systems with stronger algorithmic priors.
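The description states that AIB builds synthetic datasets from known generative rules while varying algorithmic and statistical difficulty independently. The following minimal sketch is only an illustration of that idea, not the authors' benchmark: the rules `rule_parity` and `rule_nested`, the `label_noise` parameter, and the `make_dataset` helper are hypothetical names chosen here, and the specific rules stand in for "easier" and "harder" generative mechanisms.

```python
# Hypothetical sketch (not AIB's actual code): a toy dataset generator where the
# generative rule (algorithmic difficulty) and the label noise (statistical
# difficulty) are controlled independently.
import random

def rule_parity(bits):
    # Simple generative rule: the label is the parity of the input bits.
    return sum(bits) % 2

def rule_nested(bits):
    # A slightly more involved rule (parity over a subset selected by the first
    # bit), standing in for higher algorithmic difficulty.
    start = bits[0]
    return sum(bits[start::2]) % 2

def make_dataset(rule, n_samples, seq_len, label_noise, seed=0):
    """Sample inputs uniformly, label them with a known rule, then flip a
    fraction of labels to raise statistical difficulty without changing the rule."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        x = [rng.randint(0, 1) for _ in range(seq_len)]
        y = rule(x)
        if rng.random() < label_noise:
            y = 1 - y  # statistical corruption, independent of the generative rule
        data.append((x, y))
    return data

if __name__ == "__main__":
    easy = make_dataset(rule_parity, n_samples=1000, seq_len=8, label_noise=0.0)
    harder_algorithmic = make_dataset(rule_nested, n_samples=1000, seq_len=8, label_noise=0.0)
    harder_statistical = make_dataset(rule_parity, n_samples=1000, seq_len=8, label_noise=0.2)
    print(len(easy), len(harder_algorithmic), len(harder_statistical))
```

In this toy setup, swapping the rule changes what a model must infer, while raising the noise level or shrinking the sample changes only how hard the statistics are, which mirrors the controlled experiments the abstract describes.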
Author
Eduardo Sakabe
(UNICAMP)